2026-01-05 00:00:06.701138 | Job console starting
2026-01-05 00:00:06.726030 | Updating git repos
2026-01-05 00:00:07.214762 | Cloning repos into workspace
2026-01-05 00:00:07.519305 | Restoring repo states
2026-01-05 00:00:07.554373 | Merging changes
2026-01-05 00:00:07.554419 | Checking out repos
2026-01-05 00:00:07.926739 | Preparing playbooks
2026-01-05 00:00:08.992081 | Running Ansible setup
2026-01-05 00:00:17.364296 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-05 00:00:21.563448 |
2026-01-05 00:00:21.563636 | PLAY [Base pre]
2026-01-05 00:00:21.677169 |
2026-01-05 00:00:21.677354 | TASK [Setup log path fact]
2026-01-05 00:00:21.787330 | orchestrator | ok
2026-01-05 00:00:21.961871 |
2026-01-05 00:00:21.962075 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-05 00:00:22.099875 | orchestrator | ok
2026-01-05 00:00:22.224514 |
2026-01-05 00:00:22.224670 | TASK [emit-job-header : Print job information]
2026-01-05 00:00:22.444531 | # Job Information
2026-01-05 00:00:22.445052 | Ansible Version: 2.16.14
2026-01-05 00:00:22.445107 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-05 00:00:22.445152 | Pipeline: periodic-midnight
2026-01-05 00:00:22.445180 | Executor: 521e9411259a
2026-01-05 00:00:22.445595 | Triggered by: https://github.com/osism/testbed
2026-01-05 00:00:22.445665 | Event ID: 9a1e5e94553547229e870b2662f29864
2026-01-05 00:00:22.484902 |
2026-01-05 00:00:22.485067 | LOOP [emit-job-header : Print node information]
2026-01-05 00:00:23.229452 | orchestrator | ok:
2026-01-05 00:00:23.229662 | orchestrator | # Node Information
2026-01-05 00:00:23.229698 | orchestrator | Inventory Hostname: orchestrator
2026-01-05 00:00:23.229724 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-05 00:00:23.229747 | orchestrator | Username: zuul-testbed02
2026-01-05 00:00:23.229769 | orchestrator | Distro: Debian 12.12
2026-01-05 00:00:23.229793 | orchestrator | Provider: static-testbed
2026-01-05 00:00:23.229815 | orchestrator | Region:
2026-01-05 00:00:23.229836 | orchestrator | Label: testbed-orchestrator
2026-01-05 00:00:23.229856 | orchestrator | Product Name: OpenStack Nova
2026-01-05 00:00:23.229875 | orchestrator | Interface IP: 81.163.193.140
2026-01-05 00:00:23.279552 |
2026-01-05 00:00:23.279930 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-05 00:00:26.527366 | orchestrator -> localhost | changed
2026-01-05 00:00:26.543726 |
2026-01-05 00:00:26.549063 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-05 00:00:31.292623 | orchestrator -> localhost | changed
2026-01-05 00:00:31.324698 |
2026-01-05 00:00:31.324806 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-05 00:00:32.187510 | orchestrator -> localhost | ok
2026-01-05 00:00:32.195926 |
2026-01-05 00:00:32.196038 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-05 00:00:32.278045 | orchestrator | ok
2026-01-05 00:00:32.394030 | orchestrator | included: /var/lib/zuul/builds/20c86909422e4296b16c8875f695e972/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-05 00:00:32.466275 |
2026-01-05 00:00:32.466407 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-05 00:00:37.360956 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-05 00:00:37.361124 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/20c86909422e4296b16c8875f695e972/work/20c86909422e4296b16c8875f695e972_id_rsa
2026-01-05 00:00:37.361156 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/20c86909422e4296b16c8875f695e972/work/20c86909422e4296b16c8875f695e972_id_rsa.pub
2026-01-05 00:00:37.361178 | orchestrator -> localhost | The key fingerprint is:
2026-01-05 00:00:37.361200 | orchestrator -> localhost | SHA256:HCo+nvnSI5m9edEJniz0OHAzQ6U67IUkb8GobdRuQ7M zuul-build-sshkey
2026-01-05 00:00:37.361219 | orchestrator -> localhost | The key's randomart image is:
2026-01-05 00:00:37.361248 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-05 00:00:37.361268 | orchestrator -> localhost | | .. |
2026-01-05 00:00:37.361285 | orchestrator -> localhost | | + .. |
2026-01-05 00:00:37.361301 | orchestrator -> localhost | | + B.. . |
2026-01-05 00:00:37.361318 | orchestrator -> localhost | | + B.**o.. |
2026-01-05 00:00:37.361334 | orchestrator -> localhost | |. o E+oOS+ . |
2026-01-05 00:00:37.361354 | orchestrator -> localhost | | . = =+ * o |
2026-01-05 00:00:37.361370 | orchestrator -> localhost | | += o . |
2026-01-05 00:00:37.361398 | orchestrator -> localhost | | .=++.. |
2026-01-05 00:00:37.361416 | orchestrator -> localhost | | ++++ |
2026-01-05 00:00:37.361433 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-05 00:00:37.361473 | orchestrator -> localhost | ok: Runtime: 0:00:02.294111
2026-01-05 00:00:37.367638 |
2026-01-05 00:00:37.367729 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-05 00:00:37.407248 | orchestrator | ok
2026-01-05 00:00:37.431684 | orchestrator | included: /var/lib/zuul/builds/20c86909422e4296b16c8875f695e972/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-05 00:00:37.466522 |
2026-01-05 00:00:37.466623 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-05 00:00:37.514500 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:37.521142 |
2026-01-05 00:00:37.521233 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-05 00:00:38.627774 | orchestrator | changed
2026-01-05 00:00:38.649813 |
2026-01-05 00:00:38.649919 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-05 00:00:39.011810 | orchestrator | ok
2026-01-05 00:00:39.017126 |
2026-01-05 00:00:39.017226 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-05 00:00:39.596575 | orchestrator | ok
2026-01-05 00:00:39.614232 |
2026-01-05 00:00:39.614356 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-05 00:00:40.109598 | orchestrator | ok
2026-01-05 00:00:40.119310 |
2026-01-05 00:00:40.119437 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-05 00:00:40.181641 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:40.188428 |
2026-01-05 00:00:40.188524 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-05 00:00:41.638230 | orchestrator -> localhost | changed
2026-01-05 00:00:41.652234 |
2026-01-05 00:00:41.652330 | TASK [add-build-sshkey : Add back temp key]
2026-01-05 00:00:42.609514 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/20c86909422e4296b16c8875f695e972/work/20c86909422e4296b16c8875f695e972_id_rsa (zuul-build-sshkey)
2026-01-05 00:00:42.609693 | orchestrator -> localhost | ok: Runtime: 0:00:00.043932
2026-01-05 00:00:42.615636 |
2026-01-05 00:00:42.615728 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-05 00:00:43.153276 | orchestrator | ok
2026-01-05 00:00:43.158200 |
2026-01-05 00:00:43.158289 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-05 00:00:43.219454 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:43.371392 |
2026-01-05 00:00:43.371517 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-05 00:00:44.065244 | orchestrator | ok
2026-01-05 00:00:44.082642 |
2026-01-05 00:00:44.082747 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-05 00:00:44.136916 | orchestrator | ok
2026-01-05 00:00:44.146851 |
2026-01-05 00:00:44.146995 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-05 00:00:45.019172 | orchestrator -> localhost | ok
2026-01-05 00:00:45.030501 |
2026-01-05 00:00:45.030593 | TASK [validate-host : Collect information about the host]
2026-01-05 00:00:47.301893 | orchestrator | ok
2026-01-05 00:00:47.341749 |
2026-01-05 00:00:47.341852 | TASK [validate-host : Sanitize hostname]
2026-01-05 00:00:47.424767 | orchestrator | ok
2026-01-05 00:00:47.429641 |
2026-01-05 00:00:47.429728 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-05 00:00:49.986931 | orchestrator -> localhost | changed
2026-01-05 00:00:49.992089 |
2026-01-05 00:00:49.992175 | TASK [validate-host : Collect information about zuul worker]
2026-01-05 00:00:50.771691 | orchestrator | ok
2026-01-05 00:00:50.778769 |
2026-01-05 00:00:50.778982 | TASK [validate-host : Write out all zuul information for each host]
2026-01-05 00:00:52.760687 | orchestrator -> localhost | changed
2026-01-05 00:00:52.785552 |
2026-01-05 00:00:52.785657 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-05 00:00:53.211865 | orchestrator | ok
2026-01-05 00:00:53.217502 |
2026-01-05 00:00:53.217595 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-05 00:02:13.854804 | orchestrator | changed:
2026-01-05 00:02:13.855143 | orchestrator | .d..t...... src/
2026-01-05 00:02:13.855201 | orchestrator | .d..t...... src/github.com/
2026-01-05 00:02:13.855241 | orchestrator | .d..t...... src/github.com/osism/
2026-01-05 00:02:13.855278 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-05 00:02:13.855315 | orchestrator | RedHat.yml
2026-01-05 00:02:13.873040 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-05 00:02:13.873063 | orchestrator | RedHat.yml
2026-01-05 00:02:13.873123 | orchestrator | = 1.53.0"...
2026-01-05 00:02:25.798466 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-05 00:02:25.950231 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-05 00:02:26.469711 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-05 00:02:26.554832 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-05 00:02:26.991001 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-05 00:02:27.058864 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-05 00:02:27.757428 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-05 00:02:27.757538 | orchestrator |
2026-01-05 00:02:27.757556 | orchestrator | Providers are signed by their developers.
2026-01-05 00:02:27.757569 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-05 00:02:27.757582 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-05 00:02:27.757598 | orchestrator |
2026-01-05 00:02:27.757610 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-05 00:02:27.757638 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-05 00:02:27.757650 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-05 00:02:27.757662 | orchestrator | you run "tofu init" in the future.
2026-01-05 00:02:27.757883 | orchestrator |
2026-01-05 00:02:27.757908 | orchestrator | OpenTofu has been successfully initialized!
2026-01-05 00:02:27.757926 | orchestrator |
2026-01-05 00:02:27.757940 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-05 00:02:27.757963 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-05 00:02:27.757976 | orchestrator | should now work.
2026-01-05 00:02:27.757990 | orchestrator |
2026-01-05 00:02:27.758003 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-05 00:02:27.758059 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-05 00:02:27.758075 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-05 00:02:27.940262 | orchestrator | Created and switched to workspace "ci"!
2026-01-05 00:02:27.940370 | orchestrator |
2026-01-05 00:02:27.940378 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-05 00:02:27.940384 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-05 00:02:27.940390 | orchestrator | for this configuration.
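(Editor's note, not part of the console log: the provider resolution shown in the "tofu init" output above is consistent with a `required_providers` block roughly like the sketch below. Only the `>= 2.2.0` constraint for `hashicorp/local` is actually quoted in the log; the other entries list just the sources the log names, and any further version constraints in the testbed's real configuration are unknown here.)

```hcl
# Hypothetical sketch of the provider requirements implied by the
# "tofu init" output above. Only the ">= 2.2.0" constraint for
# hashicorp/local is quoted in the log; everything else is inferred
# from the provider sources that were installed.
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # log: Finding hashicorp/local versions matching ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null" # log: v3.2.4 was selected
    }
    openstack = {
      source = "terraform-provider-openstack/openstack" # log: v3.4.0 was selected
    }
  }
}
```

Committing the generated `.terraform.lock.hcl`, as the init output recommends, pins these exact provider selections for later runs.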
2026-01-05 00:02:28.027699 | orchestrator | ci.auto.tfvars
2026-01-05 00:02:28.031001 | orchestrator | default_custom.tf
2026-01-05 00:02:29.081447 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-05 00:02:30.151282 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-05 00:02:30.467959 | orchestrator |
2026-01-05 00:02:30.468041 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-05 00:02:30.468048 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-05 00:02:30.468053 | orchestrator | + create
2026-01-05 00:02:30.468058 | orchestrator | <= read (data resources)
2026-01-05 00:02:30.468063 | orchestrator |
2026-01-05 00:02:30.468067 | orchestrator | OpenTofu will perform the following actions:
2026-01-05 00:02:30.468089 | orchestrator |
2026-01-05 00:02:30.468094 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-05 00:02:30.468098 | orchestrator | # (config refers to values not yet known)
2026-01-05 00:02:30.468103 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-05 00:02:30.468107 | orchestrator | + checksum = (known after apply)
2026-01-05 00:02:30.468111 | orchestrator | + created_at = (known after apply)
2026-01-05 00:02:30.468115 | orchestrator | + file = (known after apply)
2026-01-05 00:02:30.468119 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468144 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468148 | orchestrator | + min_disk_gb = (known after apply)
2026-01-05 00:02:30.468153 | orchestrator | + min_ram_mb = (known after apply)
2026-01-05 00:02:30.468157 | orchestrator | + most_recent = true
2026-01-05 00:02:30.468161 | orchestrator | + name = (known after apply)
2026-01-05 00:02:30.468168 | orchestrator | + protected = (known after apply)
2026-01-05 00:02:30.468172 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468180 | orchestrator | + schema = (known after apply)
2026-01-05 00:02:30.468184 | orchestrator | + size_bytes = (known after apply)
2026-01-05 00:02:30.468188 | orchestrator | + tags = (known after apply)
2026-01-05 00:02:30.468192 | orchestrator | + updated_at = (known after apply)
2026-01-05 00:02:30.468196 | orchestrator | }
2026-01-05 00:02:30.468200 | orchestrator |
2026-01-05 00:02:30.468204 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-05 00:02:30.468208 | orchestrator | # (config refers to values not yet known)
2026-01-05 00:02:30.468212 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-05 00:02:30.468216 | orchestrator | + checksum = (known after apply)
2026-01-05 00:02:30.468220 | orchestrator | + created_at = (known after apply)
2026-01-05 00:02:30.468224 | orchestrator | + file = (known after apply)
2026-01-05 00:02:30.468227 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468231 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468235 | orchestrator | + min_disk_gb = (known after apply)
2026-01-05 00:02:30.468239 | orchestrator | + min_ram_mb = (known after apply)
2026-01-05 00:02:30.468243 | orchestrator | + most_recent = true
2026-01-05 00:02:30.468247 | orchestrator | + name = (known after apply)
2026-01-05 00:02:30.468251 | orchestrator | + protected = (known after apply)
2026-01-05 00:02:30.468254 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468258 | orchestrator | + schema = (known after apply)
2026-01-05 00:02:30.468262 | orchestrator | + size_bytes = (known after apply)
2026-01-05 00:02:30.468266 | orchestrator | + tags = (known after apply)
2026-01-05 00:02:30.468270 | orchestrator | + updated_at = (known after apply)
2026-01-05 00:02:30.468273 | orchestrator | }
2026-01-05 00:02:30.468280 | orchestrator |
2026-01-05 00:02:30.468284 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-05 00:02:30.468289 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-05 00:02:30.468293 | orchestrator | + content = (known after apply)
2026-01-05 00:02:30.468297 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:30.468301 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:30.468305 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:30.468308 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:30.468312 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:30.468316 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:30.468320 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:30.468324 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:30.468328 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-05 00:02:30.468332 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468336 | orchestrator | }
2026-01-05 00:02:30.468339 | orchestrator |
2026-01-05 00:02:30.468343 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-05 00:02:30.468347 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-05 00:02:30.468351 | orchestrator | + content = (known after apply)
2026-01-05 00:02:30.468355 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:30.468359 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:30.468363 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:30.468367 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:30.468370 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:30.468380 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:30.468384 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:30.468387 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:30.468395 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-05 00:02:30.468399 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468402 | orchestrator | }
2026-01-05 00:02:30.468406 | orchestrator |
2026-01-05 00:02:30.468410 | orchestrator | # local_file.inventory will be created
2026-01-05 00:02:30.468414 | orchestrator | + resource "local_file" "inventory" {
2026-01-05 00:02:30.468418 | orchestrator | + content = (known after apply)
2026-01-05 00:02:30.468422 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:30.468426 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:30.468429 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:30.468433 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:30.468437 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:30.468441 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:30.468445 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:30.468449 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:30.468453 | orchestrator | + filename = "inventory.ci"
2026-01-05 00:02:30.468457 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468460 | orchestrator | }
2026-01-05 00:02:30.468464 | orchestrator |
2026-01-05 00:02:30.468468 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-05 00:02:30.468472 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-05 00:02:30.468476 | orchestrator | + content = (sensitive value)
2026-01-05 00:02:30.468480 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:30.468484 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:30.468487 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:30.468491 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:30.468495 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:30.468499 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:30.468503 | orchestrator | + directory_permission = "0700"
2026-01-05 00:02:30.468507 | orchestrator | + file_permission = "0600"
2026-01-05 00:02:30.468510 | orchestrator | + filename = ".id_rsa.ci"
2026-01-05 00:02:30.468514 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468518 | orchestrator | }
2026-01-05 00:02:30.468522 | orchestrator |
2026-01-05 00:02:30.468526 | orchestrator | # null_resource.node_semaphore will be created
2026-01-05 00:02:30.468530 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-05 00:02:30.468534 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468538 | orchestrator | }
2026-01-05 00:02:30.468545 | orchestrator |
2026-01-05 00:02:30.468549 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-05 00:02:30.468553 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-05 00:02:30.468557 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.468560 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.468565 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468568 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:30.468572 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468576 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-05 00:02:30.468580 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468584 | orchestrator | + size = 80
2026-01-05 00:02:30.468588 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.468591 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.468595 | orchestrator | }
2026-01-05 00:02:30.468599 | orchestrator |
2026-01-05 00:02:30.468603 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-05 00:02:30.468607 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:30.468611 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.468615 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.468618 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468626 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:30.468630 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468633 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-05 00:02:30.468637 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468641 | orchestrator | + size = 80
2026-01-05 00:02:30.468645 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.468649 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.468653 | orchestrator | }
2026-01-05 00:02:30.468656 | orchestrator |
2026-01-05 00:02:30.468660 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-05 00:02:30.468664 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:30.468668 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.468672 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.468676 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468679 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:30.468683 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468687 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-05 00:02:30.468691 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468695 | orchestrator | + size = 80
2026-01-05 00:02:30.468699 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.468702 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.468706 | orchestrator | }
2026-01-05 00:02:30.468710 | orchestrator |
2026-01-05 00:02:30.468714 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-05 00:02:30.468718 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:30.468722 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.468726 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.468729 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468733 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:30.468737 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468741 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-05 00:02:30.468745 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468749 | orchestrator | + size = 80
2026-01-05 00:02:30.468755 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.468759 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.468763 | orchestrator | }
2026-01-05 00:02:30.468767 | orchestrator |
2026-01-05 00:02:30.468771 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-05 00:02:30.468775 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:30.468779 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.468782 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.468786 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468790 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:30.468794 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468798 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-05 00:02:30.468802 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468805 | orchestrator | + size = 80
2026-01-05 00:02:30.468809 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.468813 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.468817 | orchestrator | }
2026-01-05 00:02:30.468821 | orchestrator |
2026-01-05 00:02:30.468828 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-05 00:02:30.468832 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:30.468835 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.468839 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.468843 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468850 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:30.468881 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468886 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-05 00:02:30.468889 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468893 | orchestrator | + size = 80
2026-01-05 00:02:30.468897 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.468901 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.468905 | orchestrator | }
2026-01-05 00:02:30.468909 | orchestrator |
2026-01-05 00:02:30.468913 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-05 00:02:30.468917 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:30.468920 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.468924 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.468928 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468932 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:30.468936 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468939 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-05 00:02:30.468943 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468947 | orchestrator | + size = 80
2026-01-05 00:02:30.468951 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.468955 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.468958 | orchestrator | }
2026-01-05 00:02:30.468962 | orchestrator |
2026-01-05 00:02:30.468966 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-05 00:02:30.468970 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:30.468974 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.468978 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.468982 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.468985 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.468989 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-05 00:02:30.468993 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.468997 | orchestrator | + size = 20
2026-01-05 00:02:30.469002 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.469008 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.469014 | orchestrator | }
2026-01-05 00:02:30.469020 | orchestrator |
2026-01-05 00:02:30.469026 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-05 00:02:30.469033 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:30.469037 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.469041 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.469044 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.469048 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.469052 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-05 00:02:30.469056 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.469059 | orchestrator | + size = 20
2026-01-05 00:02:30.469063 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.469067 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.469071 | orchestrator | }
2026-01-05 00:02:30.469074 | orchestrator |
2026-01-05 00:02:30.469078 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-05 00:02:30.469082 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:30.469086 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.469089 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.469093 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.469097 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.469101 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-05 00:02:30.469104 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.469128 | orchestrator | + size = 20
2026-01-05 00:02:30.469132 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.469136 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.469139 | orchestrator | }
2026-01-05 00:02:30.469143 | orchestrator |
2026-01-05 00:02:30.469147 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-05 00:02:30.469151 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:30.469154 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.469158 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.469162 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.469169 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.469173 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-05 00:02:30.469177 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.469180 | orchestrator | + size = 20
2026-01-05 00:02:30.469184 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.469188 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.469192 | orchestrator | }
2026-01-05 00:02:30.469196 | orchestrator |
2026-01-05 00:02:30.469199 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-05 00:02:30.469203 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:30.469207 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.469211 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.469215 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.469218 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.469222 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-05 00:02:30.469226 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.469230 | orchestrator | + size = 20
2026-01-05 00:02:30.469234 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.469238 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.469242 | orchestrator | }
2026-01-05 00:02:30.469249 | orchestrator |
2026-01-05 00:02:30.469253 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-05 00:02:30.469257 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:30.469261 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.469265 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.469269 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.469272 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.469276 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-05 00:02:30.469280 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.469284 | orchestrator | + size = 20
2026-01-05 00:02:30.469288 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.469292 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.469295 | orchestrator | }
2026-01-05 00:02:30.469299 | orchestrator |
2026-01-05 00:02:30.469303 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-05 00:02:30.469307 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:30.469311 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.469315 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.469319 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.469322 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.469326 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-05 00:02:30.469330 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.469334 | orchestrator | + size = 20
2026-01-05 00:02:30.469338 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.469342 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.469345 | orchestrator | }
2026-01-05 00:02:30.469349 | orchestrator |
2026-01-05 00:02:30.469353 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-05 00:02:30.469357 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:30.469371 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:30.469375 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:30.469379 | orchestrator | + id = (known after apply)
2026-01-05 00:02:30.469383 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:30.469387 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-05 00:02:30.469391 | orchestrator | + region = (known after apply)
2026-01-05 00:02:30.469394 | orchestrator | + size = 20
2026-01-05 00:02:30.469398 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:30.469402 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:30.469406 | orchestrator | }
2026-01-05 00:02:30.469410 | orchestrator |
2026-01-05 00:02:30.469414 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-05 00:02:30.469417 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-05 00:02:30.469421 | orchestrator | + attachment = (known after apply) 2026-01-05 00:02:30.469425 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:30.469429 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.469433 | orchestrator | + metadata = (known after apply) 2026-01-05 00:02:30.469437 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-05 00:02:30.469441 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.469444 | orchestrator | + size = 20 2026-01-05 00:02:30.469448 | orchestrator | + volume_retype_policy = "never" 2026-01-05 00:02:30.469452 | orchestrator | + volume_type = "ssd" 2026-01-05 00:02:30.469456 | orchestrator | } 2026-01-05 00:02:30.469460 | orchestrator | 2026-01-05 00:02:30.469464 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-05 00:02:30.469467 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-05 00:02:30.469471 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:30.469475 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:30.469479 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:30.469483 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:30.469487 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:30.469491 | orchestrator | + config_drive = true 2026-01-05 00:02:30.469497 | orchestrator | + created = (known after apply) 2026-01-05 00:02:30.469501 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:30.469505 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-05 00:02:30.469509 | orchestrator | + force_delete = false 2026-01-05 00:02:30.469513 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:30.469516 | 
orchestrator | + id = (known after apply) 2026-01-05 00:02:30.469520 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:30.469524 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:30.469528 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:30.469532 | orchestrator | + name = "testbed-manager" 2026-01-05 00:02:30.469536 | orchestrator | + power_state = "active" 2026-01-05 00:02:30.469539 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.469543 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:30.469547 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:30.469551 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:30.469555 | orchestrator | + user_data = (sensitive value) 2026-01-05 00:02:30.469559 | orchestrator | 2026-01-05 00:02:30.469563 | orchestrator | + block_device { 2026-01-05 00:02:30.469567 | orchestrator | + boot_index = 0 2026-01-05 00:02:30.469571 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:30.469574 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:30.469578 | orchestrator | + multiattach = false 2026-01-05 00:02:30.469582 | orchestrator | + source_type = "volume" 2026-01-05 00:02:30.469586 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.469593 | orchestrator | } 2026-01-05 00:02:30.469597 | orchestrator | 2026-01-05 00:02:30.469600 | orchestrator | + network { 2026-01-05 00:02:30.469604 | orchestrator | + access_network = false 2026-01-05 00:02:30.469608 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:30.469612 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:30.469616 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:30.469620 | orchestrator | + name = (known after apply) 2026-01-05 00:02:30.469623 | orchestrator | + port = (known after apply) 2026-01-05 00:02:30.469627 | orchestrator | + uuid = (known after apply) 2026-01-05 
00:02:30.469631 | orchestrator | } 2026-01-05 00:02:30.469635 | orchestrator | } 2026-01-05 00:02:30.469642 | orchestrator | 2026-01-05 00:02:30.469646 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-05 00:02:30.469650 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:30.469653 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:30.469657 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:30.469661 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:30.469665 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:30.469669 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:30.469673 | orchestrator | + config_drive = true 2026-01-05 00:02:30.469676 | orchestrator | + created = (known after apply) 2026-01-05 00:02:30.469680 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:30.469684 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:30.469688 | orchestrator | + force_delete = false 2026-01-05 00:02:30.469692 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:30.469696 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.469700 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:30.469703 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:30.469707 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:30.469711 | orchestrator | + name = "testbed-node-0" 2026-01-05 00:02:30.469715 | orchestrator | + power_state = "active" 2026-01-05 00:02:30.469719 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.469723 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:30.469726 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:30.469730 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:30.469734 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:30.469738 | orchestrator | 2026-01-05 00:02:30.469742 | orchestrator | + block_device { 2026-01-05 00:02:30.469746 | orchestrator | + boot_index = 0 2026-01-05 00:02:30.469749 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:30.469753 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:30.469757 | orchestrator | + multiattach = false 2026-01-05 00:02:30.469761 | orchestrator | + source_type = "volume" 2026-01-05 00:02:30.469765 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.469769 | orchestrator | } 2026-01-05 00:02:30.469773 | orchestrator | 2026-01-05 00:02:30.469776 | orchestrator | + network { 2026-01-05 00:02:30.469780 | orchestrator | + access_network = false 2026-01-05 00:02:30.469784 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:30.469788 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:30.469792 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:30.469795 | orchestrator | + name = (known after apply) 2026-01-05 00:02:30.469799 | orchestrator | + port = (known after apply) 2026-01-05 00:02:30.469803 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.469807 | orchestrator | } 2026-01-05 00:02:30.469811 | orchestrator | } 2026-01-05 00:02:30.469814 | orchestrator | 2026-01-05 00:02:30.469818 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-05 00:02:30.469822 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:30.469826 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:30.469833 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:30.469837 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:30.469840 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:30.469844 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:30.469848 
| orchestrator | + config_drive = true 2026-01-05 00:02:30.469852 | orchestrator | + created = (known after apply) 2026-01-05 00:02:30.469874 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:30.469877 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:30.469881 | orchestrator | + force_delete = false 2026-01-05 00:02:30.469885 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:30.469889 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.469892 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:30.469896 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:30.469900 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:30.469904 | orchestrator | + name = "testbed-node-1" 2026-01-05 00:02:30.469908 | orchestrator | + power_state = "active" 2026-01-05 00:02:30.469911 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.469915 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:30.469919 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:30.469923 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:30.469929 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:30.469933 | orchestrator | 2026-01-05 00:02:30.469937 | orchestrator | + block_device { 2026-01-05 00:02:30.469943 | orchestrator | + boot_index = 0 2026-01-05 00:02:30.469950 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:30.469957 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:30.469962 | orchestrator | + multiattach = false 2026-01-05 00:02:30.469965 | orchestrator | + source_type = "volume" 2026-01-05 00:02:30.469969 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.469973 | orchestrator | } 2026-01-05 00:02:30.469977 | orchestrator | 2026-01-05 00:02:30.469981 | orchestrator | + network { 2026-01-05 00:02:30.469984 | orchestrator | + access_network = 
false 2026-01-05 00:02:30.469988 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:30.469992 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:30.469996 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:30.469999 | orchestrator | + name = (known after apply) 2026-01-05 00:02:30.470003 | orchestrator | + port = (known after apply) 2026-01-05 00:02:30.470007 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470011 | orchestrator | } 2026-01-05 00:02:30.470043 | orchestrator | } 2026-01-05 00:02:30.470056 | orchestrator | 2026-01-05 00:02:30.470060 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-05 00:02:30.470064 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:30.470067 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:30.470071 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:30.470075 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:30.470079 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:30.470083 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:30.470087 | orchestrator | + config_drive = true 2026-01-05 00:02:30.470091 | orchestrator | + created = (known after apply) 2026-01-05 00:02:30.470094 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:30.470098 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:30.470102 | orchestrator | + force_delete = false 2026-01-05 00:02:30.470106 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:30.470110 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.470114 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:30.470122 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:30.470126 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:30.470130 | orchestrator | + name = 
"testbed-node-2" 2026-01-05 00:02:30.470133 | orchestrator | + power_state = "active" 2026-01-05 00:02:30.470137 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.470141 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:30.470145 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:30.470149 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:30.470153 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:30.470156 | orchestrator | 2026-01-05 00:02:30.470160 | orchestrator | + block_device { 2026-01-05 00:02:30.470164 | orchestrator | + boot_index = 0 2026-01-05 00:02:30.470168 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:30.470172 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:30.470176 | orchestrator | + multiattach = false 2026-01-05 00:02:30.470179 | orchestrator | + source_type = "volume" 2026-01-05 00:02:30.470183 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470187 | orchestrator | } 2026-01-05 00:02:30.470191 | orchestrator | 2026-01-05 00:02:30.470195 | orchestrator | + network { 2026-01-05 00:02:30.470199 | orchestrator | + access_network = false 2026-01-05 00:02:30.470203 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:30.470206 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:30.470210 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:30.470214 | orchestrator | + name = (known after apply) 2026-01-05 00:02:30.470218 | orchestrator | + port = (known after apply) 2026-01-05 00:02:30.470222 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470226 | orchestrator | } 2026-01-05 00:02:30.470229 | orchestrator | } 2026-01-05 00:02:30.470233 | orchestrator | 2026-01-05 00:02:30.470240 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-05 00:02:30.470244 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:30.470248 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:30.470252 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:30.470255 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:30.470259 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:30.470263 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:30.470267 | orchestrator | + config_drive = true 2026-01-05 00:02:30.470271 | orchestrator | + created = (known after apply) 2026-01-05 00:02:30.470274 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:30.470278 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:30.470282 | orchestrator | + force_delete = false 2026-01-05 00:02:30.470286 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:30.470290 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.470294 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:30.470297 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:30.470301 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:30.470305 | orchestrator | + name = "testbed-node-3" 2026-01-05 00:02:30.470309 | orchestrator | + power_state = "active" 2026-01-05 00:02:30.470313 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.470317 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:30.470320 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:30.470324 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:30.470328 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:30.470332 | orchestrator | 2026-01-05 00:02:30.470336 | orchestrator | + block_device { 2026-01-05 00:02:30.470340 | orchestrator | + boot_index = 0 2026-01-05 00:02:30.470343 | orchestrator | + delete_on_termination = false 2026-01-05 
00:02:30.470347 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:30.470355 | orchestrator | + multiattach = false 2026-01-05 00:02:30.470359 | orchestrator | + source_type = "volume" 2026-01-05 00:02:30.470363 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470367 | orchestrator | } 2026-01-05 00:02:30.470370 | orchestrator | 2026-01-05 00:02:30.470374 | orchestrator | + network { 2026-01-05 00:02:30.470378 | orchestrator | + access_network = false 2026-01-05 00:02:30.470382 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:30.470386 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:30.470390 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:30.470393 | orchestrator | + name = (known after apply) 2026-01-05 00:02:30.470397 | orchestrator | + port = (known after apply) 2026-01-05 00:02:30.470401 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470405 | orchestrator | } 2026-01-05 00:02:30.470409 | orchestrator | } 2026-01-05 00:02:30.470413 | orchestrator | 2026-01-05 00:02:30.470417 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-05 00:02:30.470420 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:30.470424 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:30.470428 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:30.470432 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:30.470436 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:30.470440 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:30.470444 | orchestrator | + config_drive = true 2026-01-05 00:02:30.470453 | orchestrator | + created = (known after apply) 2026-01-05 00:02:30.470457 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:30.470461 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:30.470465 | 
orchestrator | + force_delete = false 2026-01-05 00:02:30.470469 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:30.470473 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.470477 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:30.470480 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:30.470484 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:30.470488 | orchestrator | + name = "testbed-node-4" 2026-01-05 00:02:30.470492 | orchestrator | + power_state = "active" 2026-01-05 00:02:30.470496 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.470500 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:30.470503 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:30.470507 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:30.470511 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:30.470515 | orchestrator | 2026-01-05 00:02:30.470519 | orchestrator | + block_device { 2026-01-05 00:02:30.470523 | orchestrator | + boot_index = 0 2026-01-05 00:02:30.470526 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:30.470530 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:30.470534 | orchestrator | + multiattach = false 2026-01-05 00:02:30.470538 | orchestrator | + source_type = "volume" 2026-01-05 00:02:30.470542 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470546 | orchestrator | } 2026-01-05 00:02:30.470549 | orchestrator | 2026-01-05 00:02:30.470553 | orchestrator | + network { 2026-01-05 00:02:30.470557 | orchestrator | + access_network = false 2026-01-05 00:02:30.470561 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:30.470565 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:30.470569 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:30.470572 | orchestrator | + name = (known 
after apply) 2026-01-05 00:02:30.470576 | orchestrator | + port = (known after apply) 2026-01-05 00:02:30.470580 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470584 | orchestrator | } 2026-01-05 00:02:30.470588 | orchestrator | } 2026-01-05 00:02:30.470595 | orchestrator | 2026-01-05 00:02:30.470599 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-05 00:02:30.470603 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:30.470607 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:30.470610 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:30.470614 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:30.470618 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:30.470622 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:30.470626 | orchestrator | + config_drive = true 2026-01-05 00:02:30.470630 | orchestrator | + created = (known after apply) 2026-01-05 00:02:30.470633 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:30.470637 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:30.470641 | orchestrator | + force_delete = false 2026-01-05 00:02:30.470645 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:30.470649 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.470653 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:30.470656 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:30.470660 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:30.470664 | orchestrator | + name = "testbed-node-5" 2026-01-05 00:02:30.470668 | orchestrator | + power_state = "active" 2026-01-05 00:02:30.470672 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.470675 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:30.470679 | orchestrator | + 
stop_before_destroy = false 2026-01-05 00:02:30.470683 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:30.470687 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:30.470691 | orchestrator | 2026-01-05 00:02:30.470695 | orchestrator | + block_device { 2026-01-05 00:02:30.470699 | orchestrator | + boot_index = 0 2026-01-05 00:02:30.470702 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:30.470706 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:30.470710 | orchestrator | + multiattach = false 2026-01-05 00:02:30.470714 | orchestrator | + source_type = "volume" 2026-01-05 00:02:30.470718 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470722 | orchestrator | } 2026-01-05 00:02:30.470725 | orchestrator | 2026-01-05 00:02:30.470729 | orchestrator | + network { 2026-01-05 00:02:30.470733 | orchestrator | + access_network = false 2026-01-05 00:02:30.470737 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:30.470741 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:30.470745 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:30.470748 | orchestrator | + name = (known after apply) 2026-01-05 00:02:30.470752 | orchestrator | + port = (known after apply) 2026-01-05 00:02:30.470756 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:30.470760 | orchestrator | } 2026-01-05 00:02:30.470764 | orchestrator | } 2026-01-05 00:02:30.470768 | orchestrator | 2026-01-05 00:02:30.470772 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-05 00:02:30.470775 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-05 00:02:30.470779 | orchestrator | + fingerprint = (known after apply) 2026-01-05 00:02:30.470783 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.470787 | orchestrator | + name = "testbed" 2026-01-05 00:02:30.470791 | orchestrator | + private_key = 
(sensitive value) 2026-01-05 00:02:30.470795 | orchestrator | + public_key = (known after apply) 2026-01-05 00:02:30.470798 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.470802 | orchestrator | + user_id = (known after apply) 2026-01-05 00:02:30.470806 | orchestrator | } 2026-01-05 00:02:30.470810 | orchestrator | 2026-01-05 00:02:30.470814 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-05 00:02:30.470818 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-05 00:02:30.470825 | orchestrator | + device = (known after apply) 2026-01-05 00:02:30.470828 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.470832 | orchestrator | + instance_id = (known after apply) 2026-01-05 00:02:30.470836 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.470843 | orchestrator | + volume_id = (known after apply) 2026-01-05 00:02:30.470847 | orchestrator | } 2026-01-05 00:02:30.470851 | orchestrator | 2026-01-05 00:02:30.470869 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-05 00:02:30.470878 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-05 00:02:30.470882 | orchestrator | + device = (known after apply) 2026-01-05 00:02:30.470886 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.470890 | orchestrator | + instance_id = (known after apply) 2026-01-05 00:02:30.470894 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.470897 | orchestrator | + volume_id = (known after apply) 2026-01-05 00:02:30.470901 | orchestrator | } 2026-01-05 00:02:30.470905 | orchestrator | 2026-01-05 00:02:30.470909 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-05 00:02:30.470913 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-05 00:02:30.473248 | orchestrator | + network_id = (known after apply) 2026-01-05 00:02:30.473251 | orchestrator | + no_gateway = false 2026-01-05 00:02:30.473255 | orchestrator | + region = (known after apply) 2026-01-05 00:02:30.473259 | orchestrator | + service_types = (known after apply) 2026-01-05 00:02:30.473267 | orchestrator | + tenant_id = (known after apply) 2026-01-05 00:02:30.473271 | orchestrator | 2026-01-05 00:02:30.473275 | orchestrator | + allocation_pool { 2026-01-05 00:02:30.473279 | orchestrator | + end = "192.168.31.250" 2026-01-05 00:02:30.473282 | orchestrator | + start = "192.168.31.200" 2026-01-05 00:02:30.473286 | orchestrator | } 2026-01-05 00:02:30.473290 | orchestrator | } 2026-01-05 00:02:30.473294 | orchestrator | 2026-01-05 00:02:30.473298 | orchestrator | # terraform_data.image will be created 2026-01-05 00:02:30.473302 | orchestrator | + resource "terraform_data" "image" { 2026-01-05 00:02:30.473305 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.473309 | orchestrator | + input = "Ubuntu 24.04" 2026-01-05 00:02:30.473313 | orchestrator | + output = (known after apply) 2026-01-05 00:02:30.473317 | orchestrator | } 2026-01-05 00:02:30.473321 | orchestrator | 2026-01-05 00:02:30.473325 | orchestrator | # terraform_data.image_node will be created 2026-01-05 00:02:30.473329 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-05 00:02:30.473333 | orchestrator | + id = (known after apply) 2026-01-05 00:02:30.473336 | orchestrator | + input = "Ubuntu 24.04" 2026-01-05 00:02:30.473340 | orchestrator | + output = (known after apply) 2026-01-05 00:02:30.473344 | orchestrator | } 2026-01-05 00:02:30.473348 | orchestrator | 2026-01-05 00:02:30.473352 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
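The management subnet listed in the plan above maps to a resource definition roughly like the following. This is a sketch reconstructed from the plan output; all literal values (name, CIDR, DNS servers, allocation pool) are taken verbatim from the plan, while the `network_id` reference is an assumption, since the plan only shows it as `(known after apply)`:

```hcl
# Sketch of the subnet resource as reported by "terraform plan" above.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumption
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP addresses are restricted to the top of the /20, matching the plan.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```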
2026-01-05 00:02:30.473356 | orchestrator | 2026-01-05 00:02:30.473360 | orchestrator | Changes to Outputs: 2026-01-05 00:02:30.473364 | orchestrator | + manager_address = (sensitive value) 2026-01-05 00:02:30.473367 | orchestrator | + private_key = (sensitive value) 2026-01-05 00:02:30.571646 | orchestrator | terraform_data.image: Creating... 2026-01-05 00:02:30.574007 | orchestrator | terraform_data.image: Creation complete after 0s [id=7ad75017-43e5-ac7f-9f29-f7e3e8e51299] 2026-01-05 00:02:30.692421 | orchestrator | terraform_data.image_node: Creating... 2026-01-05 00:02:30.693698 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=ab678240-142f-66c1-350f-be693d66a740] 2026-01-05 00:02:30.710822 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-05 00:02:30.710886 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-05 00:02:30.726422 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-05 00:02:30.728721 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-05 00:02:30.730603 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-05 00:02:30.738597 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-05 00:02:30.739254 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-01-05 00:02:30.748657 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-05 00:02:30.749499 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-01-05 00:02:30.751791 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-05 00:02:31.189843 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-05 00:02:31.205649 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
2026-01-05 00:02:31.205725 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-05 00:02:31.218363 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-05 00:02:31.408917 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-01-05 00:02:31.414417 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-05 00:02:32.559846 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=3987cfe8-0716-47af-bd92-0b177c7c66a7] 2026-01-05 00:02:36.487944 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-05 00:02:36.488020 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=75b952fb-92c7-4e92-8330-2435d2a1b678] 2026-01-05 00:02:36.488113 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=92e48aa1-1628-4f72-a210-d4a4cd9ae613] 2026-01-05 00:02:36.488125 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-01-05 00:02:36.488194 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-01-05 00:02:36.488948 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=eef1532f-ab8b-4fa5-967d-60adcf1e7a20] 2026-01-05 00:02:36.488979 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-05 00:02:36.488990 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=200a53d9-75f2-4262-8bd3-fb85b57756f2] 2026-01-05 00:02:36.489000 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-01-05 00:02:36.489010 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=67556553-b44f-4ecf-b7ec-7000501d4421] 2026-01-05 00:02:36.489021 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-01-05 00:02:36.489032 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1] 2026-01-05 00:02:36.489041 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-05 00:02:36.489051 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=7205c955-31fc-4c08-90c1-5dd24967146a] 2026-01-05 00:02:36.489061 | orchestrator | local_file.id_rsa_pub: Creating... 2026-01-05 00:02:36.489071 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=f5154273-2305-4d04-879f-ade05dd05763] 2026-01-05 00:02:36.489081 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-01-05 00:02:36.489090 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=5b0c3dba-a10f-46cc-b603-e7d957ac37b8] 2026-01-05 00:02:36.489101 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-01-05 00:02:36.489111 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=ecea09cf-ce8c-40b5-ae83-1ba3809d84dc] 2026-01-05 00:02:36.489120 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-01-05 00:02:36.489130 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=c261807d-3a16-40e0-a09f-b7ca1d73873b] 2026-01-05 00:02:36.500212 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 1s [id=aefd2d81d00cb1a6f4cc27ce820dea8b7854f29b] 2026-01-05 00:02:36.500329 | orchestrator | local_file.id_rsa_pub: Creation complete after 1s [id=f2fb70816b32548bc344e31279730654a431bfc9] 2026-01-05 00:02:37.800509 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=3eca2ed8-76b6-4450-b1e2-293aa71f7a07] 2026-01-05 00:02:37.811050 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=177d06a0-8f03-40a5-8e33-2819c29e72ab] 2026-01-05 00:02:37.881616 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c] 2026-01-05 00:02:37.883150 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=64a819d6-a948-46bf-979e-0c123b5ffe57] 2026-01-05 00:02:37.939996 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=be547545-78c3-41b9-a375-def0ee26b80a] 2026-01-05 00:02:37.995827 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=1b353439-4187-4ebd-95db-cc3328d916f6] 2026-01-05 00:02:39.047827 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=d8c5b14a-3c94-4567-a2df-4d26b96e6321] 2026-01-05 00:02:39.047939 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-05 00:02:39.047954 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-05 00:02:39.047966 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-01-05 00:02:39.047978 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=808cc88d-437d-4519-ad23-eb25816aead2] 2026-01-05 00:02:39.047990 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-01-05 00:02:39.048002 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-01-05 00:02:39.048013 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-05 00:02:39.048048 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-01-05 00:02:39.048060 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-01-05 00:02:39.048071 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-01-05 00:02:39.048085 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=5302c087-e27e-4cab-b455-082011b14b4c] 2026-01-05 00:02:39.048105 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=f75795c7-ad18-4f75-b3f3-220f95a54c19] 2026-01-05 00:02:39.048142 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-01-05 00:02:39.048162 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-01-05 00:02:39.048182 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-01-05 00:02:39.048200 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 
2026-01-05 00:02:39.195170 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=863c7c21-9a01-42e3-bd32-2251bdaaf174] 2026-01-05 00:02:39.204147 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-01-05 00:02:39.230118 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=adda1cf8-0f05-4156-98e9-5383714c9132] 2026-01-05 00:02:39.237180 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-01-05 00:02:39.419270 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=12e8ff8a-0666-4168-8d84-402ef8ccae4b] 2026-01-05 00:02:39.429134 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-05 00:02:39.503037 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=9e42573a-f5c5-4ec2-abe9-7aae4dc8fb04] 2026-01-05 00:02:39.511314 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-05 00:02:39.577430 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=d5cf22a9-4160-4d4a-8b28-8e54284ddb34] 2026-01-05 00:02:39.585457 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-01-05 00:02:39.747267 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=253f4b3d-9840-42b2-abc3-b4471d6c728e] 2026-01-05 00:02:39.752900 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 
2026-01-05 00:02:39.808827 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=2910cef0-d894-456b-bf45-06b07ce480cb] 2026-01-05 00:02:39.913128 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=290b43c1-2c78-4238-bbc0-aa2100d27fb1] 2026-01-05 00:02:40.010609 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=658a1017-e324-4164-b7cc-5f9232a381fd] 2026-01-05 00:02:40.074119 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=ee6ceed0-0b63-4968-aa08-81b0bf3cbd0a] 2026-01-05 00:02:40.161753 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=acf82194-f986-44c0-a6db-8ac5bb9b6749] 2026-01-05 00:02:40.174960 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=d78f08ff-60c1-4dcc-b118-8be55ee82192] 2026-01-05 00:02:40.236470 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=363d9464-f1a5-4a43-a3ca-8174aa8d239d] 2026-01-05 00:02:40.534699 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=7bf0abae-4308-4f0c-9d87-b7062be11317] 2026-01-05 00:02:40.585336 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=647c05fb-dc7f-46e1-b4e5-d1173ba134d6] 2026-01-05 00:02:41.657065 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=89c4fbe4-78b7-45f0-82c3-4c0c88d16d0a] 2026-01-05 00:02:41.679102 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-01-05 00:02:41.682288 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 
2026-01-05 00:02:41.697474 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-01-05 00:02:41.697651 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-01-05 00:02:41.702638 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-01-05 00:02:41.707549 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-01-05 00:02:41.708746 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-01-05 00:02:44.222834 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=779a4eeb-b6cd-40af-8046-07177e414d65] 2026-01-05 00:02:44.231444 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-01-05 00:02:44.240506 | orchestrator | local_file.inventory: Creating... 2026-01-05 00:02:44.240835 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-01-05 00:02:44.508792 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 1s [id=0d48400afbb800a916337dfdc92696a95bbfc42f] 2026-01-05 00:02:44.508936 | orchestrator | local_file.inventory: Creation complete after 1s [id=0e4a337a1d006e4615d82bab413f4c7b3b989e5a] 2026-01-05 00:02:45.113198 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=779a4eeb-b6cd-40af-8046-07177e414d65] 2026-01-05 00:02:51.686267 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-01-05 00:02:51.698410 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-01-05 00:02:51.698484 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-01-05 00:02:51.703564 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[10s elapsed] 2026-01-05 00:02:51.714293 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-01-05 00:02:51.714395 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-01-05 00:03:01.688211 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-01-05 00:03:01.699428 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-01-05 00:03:01.699627 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-01-05 00:03:01.703710 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-01-05 00:03:01.715107 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-01-05 00:03:01.715372 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-01-05 00:03:11.697770 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-01-05 00:03:11.700169 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-01-05 00:03:11.700268 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-01-05 00:03:11.704496 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-01-05 00:03:11.715989 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-01-05 00:03:11.716096 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2026-01-05 00:03:12.493359 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=70776153-3b6f-4736-97c0-28ddd9e7d1e9] 2026-01-05 00:03:12.564653 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=9ba9d829-1eb7-422c-b72e-cfe99f5a2fde] 2026-01-05 00:03:21.706783 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-01-05 00:03:21.706941 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-01-05 00:03:21.706954 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-01-05 00:03:21.717185 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-01-05 00:03:22.488599 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 40s [id=e940e688-1eba-4550-8b00-a47bdda1745e] 2026-01-05 00:03:22.558745 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=80c75030-0f1f-4433-8781-9a7d2844f7d7] 2026-01-05 00:03:22.580569 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=abb81e57-fd86-481a-af45-3a676647be92] 2026-01-05 00:03:23.299761 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=fa7eab97-d981-45ac-92e4-b0480e733d3b] 2026-01-05 00:03:23.320787 | orchestrator | null_resource.node_semaphore: Creating... 2026-01-05 00:03:23.324846 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=9160655924805249159] 2026-01-05 00:03:23.328389 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-01-05 00:03:23.328481 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 
2026-01-05 00:03:23.330573 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-01-05 00:03:23.340356 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-01-05 00:03:23.341987 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-01-05 00:03:23.349883 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-01-05 00:03:23.350754 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-01-05 00:03:23.362311 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-01-05 00:03:23.370563 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-01-05 00:03:23.376435 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-01-05 00:03:26.810724 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=e940e688-1eba-4550-8b00-a47bdda1745e/5b0c3dba-a10f-46cc-b603-e7d957ac37b8] 2026-01-05 00:03:26.817162 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=9ba9d829-1eb7-422c-b72e-cfe99f5a2fde/f5154273-2305-4d04-879f-ade05dd05763] 2026-01-05 00:03:26.847311 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=fa7eab97-d981-45ac-92e4-b0480e733d3b/7205c955-31fc-4c08-90c1-5dd24967146a] 2026-01-05 00:03:32.917346 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=9ba9d829-1eb7-422c-b72e-cfe99f5a2fde/92e48aa1-1628-4f72-a210-d4a4cd9ae613] 2026-01-05 00:03:32.964117 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=e940e688-1eba-4550-8b00-a47bdda1745e/200a53d9-75f2-4262-8bd3-fb85b57756f2] 
2026-01-05 00:03:32.971066 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=fa7eab97-d981-45ac-92e4-b0480e733d3b/75b952fb-92c7-4e92-8330-2435d2a1b678] 2026-01-05 00:03:32.984179 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=9ba9d829-1eb7-422c-b72e-cfe99f5a2fde/a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1] 2026-01-05 00:03:33.003547 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=fa7eab97-d981-45ac-92e4-b0480e733d3b/67556553-b44f-4ecf-b7ec-7000501d4421] 2026-01-05 00:03:33.012384 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=e940e688-1eba-4550-8b00-a47bdda1745e/eef1532f-ab8b-4fa5-967d-60adcf1e7a20] 2026-01-05 00:03:33.377524 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-01-05 00:03:43.377949 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-05 00:03:43.757112 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=0a324d52-ec02-4f47-b32e-078ebb417f52] 2026-01-05 00:03:43.779490 | orchestrator | 2026-01-05 00:03:43.779665 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
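For reference, the VRRP rule that the apply above creates corresponds to a definition along these lines. This is a sketch using the attribute values shown verbatim in the plan output; the `security_group_id` reference is an assumption, as the plan only shows it as `(known after apply)`:

```hcl
# Sketch of the VRRP security group rule from the plan output.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"        # IP protocol number 112 is VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id  # assumption
}
```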
2026-01-05 00:03:43.779685 | orchestrator | 2026-01-05 00:03:43.779697 | orchestrator | Outputs: 2026-01-05 00:03:43.779708 | orchestrator | 2026-01-05 00:03:43.779720 | orchestrator | manager_address = 2026-01-05 00:03:43.779733 | orchestrator | private_key = 2026-01-05 00:03:44.115459 | orchestrator | ok: Runtime: 0:01:18.237635 2026-01-05 00:03:44.150486 | 2026-01-05 00:03:44.150632 | TASK [Create infrastructure (stable)] 2026-01-05 00:03:44.685317 | orchestrator | skipping: Conditional result was False 2026-01-05 00:03:44.705714 | 2026-01-05 00:03:44.705911 | TASK [Fetch manager address] 2026-01-05 00:03:45.222988 | orchestrator | ok 2026-01-05 00:03:45.230977 | 2026-01-05 00:03:45.231110 | TASK [Set manager_host address] 2026-01-05 00:03:45.335502 | orchestrator | ok 2026-01-05 00:03:45.345962 | 2026-01-05 00:03:45.346140 | LOOP [Update ansible collections] 2026-01-05 00:03:46.690063 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:03:46.690493 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-05 00:03:46.690571 | orchestrator | Starting galaxy collection install process 2026-01-05 00:03:46.690615 | orchestrator | Process install dependency map 2026-01-05 00:03:46.690655 | orchestrator | Starting collection install process 2026-01-05 00:03:46.690687 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2026-01-05 00:03:46.690731 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2026-01-05 00:03:46.690802 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-05 00:03:46.690916 | orchestrator | ok: Item: commons Runtime: 0:00:00.915673 2026-01-05 00:03:47.967795 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 
00:03:47.968031 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-05 00:03:47.968111 | orchestrator | Starting galaxy collection install process 2026-01-05 00:03:47.968172 | orchestrator | Process install dependency map 2026-01-05 00:03:47.968227 | orchestrator | Starting collection install process 2026-01-05 00:03:47.968278 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2026-01-05 00:03:47.968330 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2026-01-05 00:03:47.968413 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-05 00:03:47.968496 | orchestrator | ok: Item: services Runtime: 0:00:00.897550 2026-01-05 00:03:47.993973 | 2026-01-05 00:03:47.994225 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-05 00:03:58.682881 | orchestrator | ok 2026-01-05 00:03:58.692185 | 2026-01-05 00:03:58.692312 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-05 00:04:58.725510 | orchestrator | ok 2026-01-05 00:04:58.733082 | 2026-01-05 00:04:58.733208 | TASK [Fetch manager ssh hostkey] 2026-01-05 00:05:00.311330 | orchestrator | Output suppressed because no_log was given 2026-01-05 00:05:00.330943 | 2026-01-05 00:05:00.331149 | TASK [Get ssh keypair from terraform environment] 2026-01-05 00:05:00.870496 | orchestrator | ok: Runtime: 0:00:00.007901 2026-01-05 00:05:00.892090 | 2026-01-05 00:05:00.892697 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-05 00:05:00.937664 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-01-05 00:05:00.946987 | 2026-01-05 00:05:00.947131 | TASK [Run manager part 0] 2026-01-05 00:05:02.139442 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:05:02.196182 | orchestrator | 2026-01-05 00:05:02.196257 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-05 00:05:02.196272 | orchestrator | 2026-01-05 00:05:02.196292 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-05 00:05:04.207046 | orchestrator | ok: [testbed-manager] 2026-01-05 00:05:04.207147 | orchestrator | 2026-01-05 00:05:04.207180 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-05 00:05:04.207195 | orchestrator | 2026-01-05 00:05:04.207208 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:05:06.258201 | orchestrator | ok: [testbed-manager] 2026-01-05 00:05:06.258288 | orchestrator | 2026-01-05 00:05:06.258300 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-05 00:05:06.966892 | orchestrator | ok: [testbed-manager] 2026-01-05 00:05:06.966988 | orchestrator | 2026-01-05 00:05:06.967002 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-05 00:05:07.023412 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:07.023467 | orchestrator | 2026-01-05 00:05:07.023477 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-05 00:05:07.062253 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:07.062324 | orchestrator | 2026-01-05 00:05:07.062338 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-05 00:05:07.106947 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:07.107012 | 
orchestrator | 2026-01-05 00:05:07.107021 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-05 00:05:07.146951 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:07.147076 | orchestrator | 2026-01-05 00:05:07.147085 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-05 00:05:07.196282 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:07.196359 | orchestrator | 2026-01-05 00:05:07.196373 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-05 00:05:07.241884 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:07.241952 | orchestrator | 2026-01-05 00:05:07.241960 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-05 00:05:07.294169 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:07.294227 | orchestrator | 2026-01-05 00:05:07.294238 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-05 00:05:08.085886 | orchestrator | changed: [testbed-manager] 2026-01-05 00:05:08.085946 | orchestrator | 2026-01-05 00:05:08.085956 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-05 00:07:55.478678 | orchestrator | changed: [testbed-manager] 2026-01-05 00:07:55.478764 | orchestrator | 2026-01-05 00:07:55.478779 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-05 00:09:34.038369 | orchestrator | changed: [testbed-manager] 2026-01-05 00:09:34.038473 | orchestrator | 2026-01-05 00:09:34.038493 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-05 00:09:57.283272 | orchestrator | changed: [testbed-manager] 2026-01-05 00:09:57.283327 | orchestrator | 2026-01-05 00:09:57.283338 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2026-01-05 00:10:07.027282 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:07.027629 | orchestrator | 2026-01-05 00:10:07.027656 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-05 00:10:07.079498 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:07.079542 | orchestrator | 2026-01-05 00:10:07.079550 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-05 00:10:07.918925 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:07.919014 | orchestrator | 2026-01-05 00:10:07.919030 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-05 00:10:08.700080 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:08.700155 | orchestrator | 2026-01-05 00:10:08.700167 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-05 00:10:15.336980 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:15.337080 | orchestrator | 2026-01-05 00:10:15.337145 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-05 00:10:21.804482 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:21.804562 | orchestrator | 2026-01-05 00:10:21.804576 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-05 00:10:24.673291 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:24.673390 | orchestrator | 2026-01-05 00:10:24.673405 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-05 00:10:26.650171 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:26.650266 | orchestrator | 2026-01-05 00:10:26.650280 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-05 
00:10:27.808549 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-05 00:10:27.808656 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-05 00:10:27.808672 | orchestrator | 2026-01-05 00:10:27.808685 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-05 00:10:27.852989 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-05 00:10:27.853072 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-05 00:10:27.853086 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-05 00:10:27.853099 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-05 00:10:31.562706 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-05 00:10:31.562801 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-05 00:10:31.562816 | orchestrator | 2026-01-05 00:10:31.562829 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-05 00:10:32.180826 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:32.180874 | orchestrator | 2026-01-05 00:10:32.180883 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-05 00:10:51.527754 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-05 00:10:51.527865 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-05 00:10:51.527883 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-05 00:10:51.527896 | orchestrator | 2026-01-05 00:10:51.527909 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-05 00:10:54.038985 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-01-05 00:10:54.039113 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-05 00:10:54.039129 | orchestrator | 2026-01-05 00:10:54.039141 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-05 00:10:54.039154 | orchestrator | 2026-01-05 00:10:54.039166 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:10:55.570249 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:55.570370 | orchestrator | 2026-01-05 00:10:55.570389 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-05 00:10:55.620221 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:55.620297 | orchestrator | 2026-01-05 00:10:55.620306 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-05 00:10:55.687297 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:55.687393 | orchestrator | 2026-01-05 00:10:55.687408 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-05 00:10:56.520669 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:56.520721 | orchestrator | 2026-01-05 00:10:56.520731 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-05 00:10:57.323650 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:57.323765 | orchestrator | 2026-01-05 00:10:57.323791 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-05 00:10:58.873433 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-05 00:10:58.873537 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-05 00:10:58.873553 | orchestrator | 2026-01-05 00:10:58.873587 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-01-05 00:11:00.373015 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:00.373132 | orchestrator | 2026-01-05 00:11:00.373141 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-05 00:11:02.249233 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:11:02.249527 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-05 00:11:02.249542 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:11:02.249551 | orchestrator | 2026-01-05 00:11:02.249562 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-05 00:11:02.318192 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:02.318300 | orchestrator | 2026-01-05 00:11:02.318318 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-05 00:11:02.386737 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:02.386843 | orchestrator | 2026-01-05 00:11:02.386862 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-05 00:11:02.996555 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:02.996610 | orchestrator | 2026-01-05 00:11:02.996619 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-05 00:11:03.088617 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:03.088669 | orchestrator | 2026-01-05 00:11:03.088680 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-05 00:11:03.987220 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:11:03.987268 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:03.987275 | orchestrator | 2026-01-05 00:11:03.987281 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-05 00:11:04.029877 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:04.030061 | orchestrator | 2026-01-05 00:11:04.030070 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-05 00:11:04.078680 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:04.078731 | orchestrator | 2026-01-05 00:11:04.078740 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-05 00:11:04.124961 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:04.125004 | orchestrator | 2026-01-05 00:11:04.125013 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-05 00:11:04.207967 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:04.208009 | orchestrator | 2026-01-05 00:11:04.208016 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-05 00:11:05.015583 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:05.015633 | orchestrator | 2026-01-05 00:11:05.015643 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-05 00:11:05.015650 | orchestrator | 2026-01-05 00:11:05.015657 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:11:06.485748 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:06.485777 | orchestrator | 2026-01-05 00:11:06.485783 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-05 00:11:07.559037 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:07.559083 | orchestrator | 2026-01-05 00:11:07.559090 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:11:07.559096 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-05 00:11:07.559101 | orchestrator | 2026-01-05 00:11:07.744351 | orchestrator | ok: Runtime: 0:06:06.316452 2026-01-05 00:11:07.754051 | 2026-01-05 00:11:07.754179 | TASK [Point out that the log in on the manager is now possible] 2026-01-05 00:11:07.803562 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-05 00:11:07.813943 | 2026-01-05 00:11:07.814106 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-05 00:11:07.850190 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-05 00:11:07.861946 | 2026-01-05 00:11:07.862105 | TASK [Run manager part 1 + 2] 2026-01-05 00:11:08.700717 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:11:08.748476 | orchestrator | 2026-01-05 00:11:08.748531 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-05 00:11:08.748541 | orchestrator | 2026-01-05 00:11:08.748557 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:11:11.400191 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:11.400966 | orchestrator | 2026-01-05 00:11:11.401000 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-05 00:11:11.443629 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:11.443688 | orchestrator | 2026-01-05 00:11:11.443698 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-05 00:11:11.487755 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:11.487807 | orchestrator | 2026-01-05 00:11:11.487816 | orchestrator | TASK [osism.commons.repository : Gather variables for 
each operating system] *** 2026-01-05 00:11:11.522919 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:11.522974 | orchestrator | 2026-01-05 00:11:11.522983 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-05 00:11:11.594637 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:11.594718 | orchestrator | 2026-01-05 00:11:11.594734 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-05 00:11:11.659191 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:11.659255 | orchestrator | 2026-01-05 00:11:11.659266 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-05 00:11:11.704205 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-05 00:11:11.704261 | orchestrator | 2026-01-05 00:11:11.704268 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-05 00:11:12.515587 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:12.515650 | orchestrator | 2026-01-05 00:11:12.515659 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-05 00:11:12.560410 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:12.560460 | orchestrator | 2026-01-05 00:11:12.560467 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-05 00:11:14.082492 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:14.082569 | orchestrator | 2026-01-05 00:11:14.082583 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-05 00:11:14.740421 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:14.740481 | orchestrator | 2026-01-05 00:11:14.740489 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-01-05 00:11:16.051401 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:16.051475 | orchestrator | 2026-01-05 00:11:16.051487 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-05 00:11:32.903644 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:32.903743 | orchestrator | 2026-01-05 00:11:32.903761 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-05 00:11:33.622319 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:33.755667 | orchestrator | 2026-01-05 00:11:33.755730 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-05 00:11:33.785774 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:33.785866 | orchestrator | 2026-01-05 00:11:33.785883 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-05 00:11:34.678597 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:34.678683 | orchestrator | 2026-01-05 00:11:34.678698 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-05 00:11:35.696481 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:35.696587 | orchestrator | 2026-01-05 00:11:35.696611 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-05 00:11:36.351557 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:36.351600 | orchestrator | 2026-01-05 00:11:36.351607 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-05 00:11:36.388652 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-05 00:11:36.388776 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-01-05 00:11:36.388793 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-05 00:11:36.388806 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-05 00:11:38.895335 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:38.895423 | orchestrator | 2026-01-05 00:11:38.895440 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-05 00:11:48.021566 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-05 00:11:48.021745 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-05 00:11:48.021759 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-05 00:11:48.021767 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-05 00:11:48.021780 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-05 00:11:48.021788 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-05 00:11:48.021795 | orchestrator | 2026-01-05 00:11:48.021802 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-05 00:11:49.104406 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:49.104505 | orchestrator | 2026-01-05 00:11:49.104524 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-05 00:11:49.145245 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:49.145344 | orchestrator | 2026-01-05 00:11:49.145368 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-05 00:11:52.411337 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:52.411559 | orchestrator | 2026-01-05 00:11:52.411579 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-05 00:11:52.449002 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:52.449108 | 
orchestrator | 2026-01-05 00:11:52.449130 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-05 00:13:40.131375 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:40.132763 | orchestrator | 2026-01-05 00:13:40.132798 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-05 00:13:41.332463 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:41.332533 | orchestrator | 2026-01-05 00:13:41.332554 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:13:41.332571 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-05 00:13:41.332642 | orchestrator | 2026-01-05 00:13:41.520628 | orchestrator | ok: Runtime: 0:02:33.233889 2026-01-05 00:13:41.538019 | 2026-01-05 00:13:41.538173 | TASK [Reboot manager] 2026-01-05 00:13:43.084088 | orchestrator | ok: Runtime: 0:00:00.973934 2026-01-05 00:13:43.100442 | 2026-01-05 00:13:43.100626 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-05 00:14:01.222153 | orchestrator | ok 2026-01-05 00:14:01.231466 | 2026-01-05 00:14:01.231598 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-05 00:15:01.276903 | orchestrator | ok 2026-01-05 00:15:01.288307 | 2026-01-05 00:15:01.288480 | TASK [Deploy manager + bootstrap nodes] 2026-01-05 00:15:04.099932 | orchestrator | 2026-01-05 00:15:04.100108 | orchestrator | # DEPLOY MANAGER 2026-01-05 00:15:04.100132 | orchestrator | 2026-01-05 00:15:04.100147 | orchestrator | + set -e 2026-01-05 00:15:04.100161 | orchestrator | + echo 2026-01-05 00:15:04.100175 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-05 00:15:04.100193 | orchestrator | + echo 2026-01-05 00:15:04.100239 | orchestrator | + cat /opt/manager-vars.sh 2026-01-05 00:15:04.104153 | orchestrator | export NUMBER_OF_NODES=6 2026-01-05 
00:15:04.104207 | orchestrator | 2026-01-05 00:15:04.104220 | orchestrator | export CEPH_VERSION=reef 2026-01-05 00:15:04.104234 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-05 00:15:04.104246 | orchestrator | export MANAGER_VERSION=latest 2026-01-05 00:15:04.104271 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-01-05 00:15:04.104283 | orchestrator | 2026-01-05 00:15:04.104301 | orchestrator | export ARA=false 2026-01-05 00:15:04.104313 | orchestrator | export DEPLOY_MODE=manager 2026-01-05 00:15:04.104330 | orchestrator | export TEMPEST=true 2026-01-05 00:15:04.104342 | orchestrator | export IS_ZUUL=true 2026-01-05 00:15:04.104353 | orchestrator | 2026-01-05 00:15:04.104372 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.38 2026-01-05 00:15:04.104383 | orchestrator | export EXTERNAL_API=false 2026-01-05 00:15:04.104395 | orchestrator | 2026-01-05 00:15:04.104406 | orchestrator | export IMAGE_USER=ubuntu 2026-01-05 00:15:04.104442 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-05 00:15:04.104455 | orchestrator | 2026-01-05 00:15:04.104466 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-05 00:15:04.104485 | orchestrator | 2026-01-05 00:15:04.104497 | orchestrator | + echo 2026-01-05 00:15:04.104511 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:15:04.105575 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 00:15:04.105603 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:15:04.105615 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:15:04.105628 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 00:15:04.105825 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:15:04.105867 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:15:04.105879 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 00:15:04.105987 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:15:04.106003 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:15:04.106057 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-01-05 00:15:04.106069 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 00:15:04.106080 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-05 00:15:04.106092 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-05 00:15:04.106103 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-05 00:15:04.106146 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-05 00:15:04.106157 | orchestrator | ++ export ARA=false 2026-01-05 00:15:04.106169 | orchestrator | ++ ARA=false 2026-01-05 00:15:04.106191 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:15:04.106203 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:15:04.106214 | orchestrator | ++ export TEMPEST=true 2026-01-05 00:15:04.106225 | orchestrator | ++ TEMPEST=true 2026-01-05 00:15:04.106236 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:15:04.106247 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:15:04.106258 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.38 2026-01-05 00:15:04.106269 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.38 2026-01-05 00:15:04.106280 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:15:04.106291 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:15:04.106302 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:15:04.106313 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:15:04.106324 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:15:04.106335 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:15:04.106346 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:15:04.106357 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:15:04.106368 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-05 00:15:04.167820 | orchestrator | + docker version 2026-01-05 00:15:04.467206 | orchestrator | Client: Docker Engine - Community 2026-01-05 00:15:04.467263 | orchestrator | Version: 27.5.1 
2026-01-05 00:15:04.467270 | orchestrator | API version: 1.47 2026-01-05 00:15:04.467275 | orchestrator | Go version: go1.22.11 2026-01-05 00:15:04.467279 | orchestrator | Git commit: 9f9e405 2026-01-05 00:15:04.467284 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-05 00:15:04.467289 | orchestrator | OS/Arch: linux/amd64 2026-01-05 00:15:04.467293 | orchestrator | Context: default 2026-01-05 00:15:04.467297 | orchestrator | 2026-01-05 00:15:04.467307 | orchestrator | Server: Docker Engine - Community 2026-01-05 00:15:04.467311 | orchestrator | Engine: 2026-01-05 00:15:04.467315 | orchestrator | Version: 27.5.1 2026-01-05 00:15:04.467320 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-05 00:15:04.467340 | orchestrator | Go version: go1.22.11 2026-01-05 00:15:04.467344 | orchestrator | Git commit: 4c9b3b0 2026-01-05 00:15:04.467403 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-05 00:15:04.467410 | orchestrator | OS/Arch: linux/amd64 2026-01-05 00:15:04.467414 | orchestrator | Experimental: false 2026-01-05 00:15:04.467418 | orchestrator | containerd: 2026-01-05 00:15:04.467559 | orchestrator | Version: v2.2.1 2026-01-05 00:15:04.467566 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-05 00:15:04.467571 | orchestrator | runc: 2026-01-05 00:15:04.467575 | orchestrator | Version: 1.3.4 2026-01-05 00:15:04.467578 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-05 00:15:04.467582 | orchestrator | docker-init: 2026-01-05 00:15:04.467973 | orchestrator | Version: 0.19.0 2026-01-05 00:15:04.467980 | orchestrator | GitCommit: de40ad0 2026-01-05 00:15:04.472714 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-05 00:15:04.480853 | orchestrator | + set -e 2026-01-05 00:15:04.480884 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:15:04.480890 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:15:04.480896 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 
00:15:04.480900 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:15:04.480905 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:15:04.480910 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 00:15:04.480915 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 00:15:04.480919 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-05 00:15:04.480924 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-05 00:15:04.480928 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-05 00:15:04.480933 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-05 00:15:04.480946 | orchestrator | ++ export ARA=false 2026-01-05 00:15:04.480951 | orchestrator | ++ ARA=false 2026-01-05 00:15:04.480956 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:15:04.480961 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:15:04.480965 | orchestrator | ++ export TEMPEST=true 2026-01-05 00:15:04.480970 | orchestrator | ++ TEMPEST=true 2026-01-05 00:15:04.480974 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:15:04.480978 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:15:04.480983 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.38 2026-01-05 00:15:04.480987 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.38 2026-01-05 00:15:04.480992 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:15:04.480996 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:15:04.481001 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:15:04.481005 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:15:04.481010 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:15:04.481014 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:15:04.481019 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:15:04.481023 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:15:04.481028 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:15:04.481032 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-05 00:15:04.481037 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:15:04.481041 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:15:04.481048 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 00:15:04.481054 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-05 00:15:04.481058 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 00:15:04.481063 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-05 00:15:04.491577 | orchestrator | + set -e 2026-01-05 00:15:04.491610 | orchestrator | + VERSION=reef 2026-01-05 00:15:04.492667 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:15:04.497539 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-05 00:15:04.497568 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:15:04.503811 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-01-05 00:15:04.511281 | orchestrator | + set -e 2026-01-05 00:15:04.511299 | orchestrator | + VERSION=2025.1 2026-01-05 00:15:04.512167 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:15:04.515446 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-05 00:15:04.515459 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:15:04.521679 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-05 00:15:04.522788 | orchestrator | ++ semver latest 7.0.0 2026-01-05 00:15:04.601197 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:15:04.601279 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 00:15:04.601294 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-05 00:15:04.601992 | orchestrator | ++ semver latest 10.0.0-0 2026-01-05 00:15:04.663842 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:15:04.664302 | orchestrator | ++ semver 2025.1 2025.1 2026-01-05 00:15:04.756744 | orchestrator | + [[ 0 -ge 0 ]] 2026-01-05 00:15:04.756808 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-01-05 00:15:04.764244 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-01-05 00:15:04.769286 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-05 00:15:04.874665 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 00:15:04.876564 | orchestrator | + source /opt/venv/bin/activate 2026-01-05 00:15:04.877876 | orchestrator | ++ deactivate nondestructive 2026-01-05 00:15:04.877910 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:15:04.877928 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:15:04.877940 | orchestrator | ++ hash -r 2026-01-05 00:15:04.878089 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:15:04.878116 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-05 00:15:04.878134 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-05 00:15:04.878147 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-05 00:15:04.878459 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-05 00:15:04.878493 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-05 00:15:04.878505 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-05 00:15:04.878516 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-05 00:15:04.878533 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:15:04.878570 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:15:04.878583 | orchestrator | ++ export PATH 2026-01-05 00:15:04.878598 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:15:04.878693 | orchestrator | ++ '[' -z '' ']' 2026-01-05 00:15:04.878721 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-05 00:15:04.878738 | orchestrator | ++ PS1='(venv) ' 2026-01-05 00:15:04.878750 | orchestrator | ++ export PS1 2026-01-05 00:15:04.878772 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-05 00:15:04.878785 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-05 00:15:04.878942 | orchestrator | ++ hash -r 2026-01-05 00:15:04.879067 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-05 00:15:06.435528 | orchestrator | 2026-01-05 00:15:06.435646 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-05 00:15:06.435674 | orchestrator | 2026-01-05 00:15:06.435695 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-05 00:15:07.080936 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:07.081056 | orchestrator | 2026-01-05 00:15:07.081075 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-01-05 00:15:08.114314 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:08.114386 | orchestrator | 2026-01-05 00:15:08.114396 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-05 00:15:08.114404 | orchestrator | 2026-01-05 00:15:08.114410 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:15:10.784206 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:10.784282 | orchestrator | 2026-01-05 00:15:10.784294 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-05 00:15:10.845630 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:10.845712 | orchestrator | 2026-01-05 00:15:10.845725 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-05 00:15:11.326831 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:11.326949 | orchestrator | 2026-01-05 00:15:11.326968 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-05 00:15:11.369440 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:11.369527 | orchestrator | 2026-01-05 00:15:11.369542 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-05 00:15:11.712531 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:11.712637 | orchestrator | 2026-01-05 00:15:11.712653 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-05 00:15:11.762123 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:11.762213 | orchestrator | 2026-01-05 00:15:11.762228 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-05 00:15:12.123910 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:12.124063 | orchestrator | 2026-01-05 00:15:12.124093 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2026-01-05 00:15:12.251480 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:12.251590 | orchestrator | 2026-01-05 00:15:12.251605 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-05 00:15:12.251617 | orchestrator | 2026-01-05 00:15:12.251628 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:15:14.976242 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:14.976370 | orchestrator | 2026-01-05 00:15:14.976388 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-05 00:15:15.067260 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-05 00:15:15.067367 | orchestrator | 2026-01-05 00:15:15.067380 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-05 00:15:15.119045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-05 00:15:15.119140 | orchestrator | 2026-01-05 00:15:15.119150 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-05 00:15:16.135571 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-05 00:15:16.135675 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-05 00:15:16.135690 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-05 00:15:16.135703 | orchestrator | 2026-01-05 00:15:16.135715 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-05 00:15:18.058657 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-05 00:15:18.058795 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2026-01-05 00:15:18.058813 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-05 00:15:18.058826 | orchestrator | 2026-01-05 00:15:18.058838 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-05 00:15:18.704530 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:15:18.704621 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:18.704634 | orchestrator | 2026-01-05 00:15:18.704642 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-05 00:15:19.367807 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:15:19.367931 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:19.367950 | orchestrator | 2026-01-05 00:15:19.367963 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-05 00:15:19.431298 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:19.431423 | orchestrator | 2026-01-05 00:15:19.431449 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-05 00:15:19.828518 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:19.828625 | orchestrator | 2026-01-05 00:15:19.828643 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-05 00:15:19.894673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-05 00:15:19.894771 | orchestrator | 2026-01-05 00:15:19.894787 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-05 00:15:21.032565 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:21.032654 | orchestrator | 2026-01-05 00:15:21.032671 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-05 
00:15:21.888654 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:21.888776 | orchestrator | 2026-01-05 00:15:21.888804 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-05 00:15:32.879273 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:32.879361 | orchestrator | 2026-01-05 00:15:32.879369 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-05 00:15:32.927551 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:32.927682 | orchestrator | 2026-01-05 00:15:32.927713 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-05 00:15:32.927735 | orchestrator | 2026-01-05 00:15:32.927753 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:15:34.911734 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:34.911865 | orchestrator | 2026-01-05 00:15:34.911884 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-05 00:15:35.031116 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-05 00:15:35.031249 | orchestrator | 2026-01-05 00:15:35.031277 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-05 00:15:35.086597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:15:35.086727 | orchestrator | 2026-01-05 00:15:35.086755 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-05 00:15:38.015125 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:38.015236 | orchestrator | 2026-01-05 00:15:38.015254 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-05 00:15:38.078738 | 
orchestrator | ok: [testbed-manager] 2026-01-05 00:15:38.078829 | orchestrator | 2026-01-05 00:15:38.078845 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-05 00:15:38.221265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-05 00:15:38.221370 | orchestrator | 2026-01-05 00:15:38.221388 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-05 00:15:41.235118 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-05 00:15:41.235226 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-05 00:15:41.235241 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-05 00:15:41.235254 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-05 00:15:41.235265 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-05 00:15:41.235276 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-05 00:15:41.235287 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-05 00:15:41.235298 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-05 00:15:41.235309 | orchestrator | 2026-01-05 00:15:41.235322 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-05 00:15:41.922702 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:41.922785 | orchestrator | 2026-01-05 00:15:41.922794 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-05 00:15:42.579810 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:42.579893 | orchestrator | 2026-01-05 00:15:42.579909 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-05 
00:15:42.664313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-05 00:15:42.664408 | orchestrator | 2026-01-05 00:15:42.664423 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-05 00:15:43.942722 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-05 00:15:43.942826 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-05 00:15:43.942841 | orchestrator | 2026-01-05 00:15:43.942854 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-05 00:15:44.629168 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:44.629273 | orchestrator | 2026-01-05 00:15:44.629290 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-05 00:15:44.685972 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:44.686111 | orchestrator | 2026-01-05 00:15:44.686129 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-05 00:15:44.769765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-05 00:15:44.769932 | orchestrator | 2026-01-05 00:15:44.769986 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-05 00:15:45.436892 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:45.437040 | orchestrator | 2026-01-05 00:15:45.437057 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-05 00:15:45.511123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-05 00:15:45.511223 | orchestrator | 2026-01-05 00:15:45.511239 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-05 00:15:46.986271 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:15:46.986382 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:15:46.986397 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:46.986411 | orchestrator | 2026-01-05 00:15:46.986424 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-05 00:15:47.698613 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:47.698718 | orchestrator | 2026-01-05 00:15:47.698735 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-05 00:15:47.747359 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:47.747462 | orchestrator | 2026-01-05 00:15:47.747494 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-05 00:15:47.844846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-05 00:15:47.845275 | orchestrator | 2026-01-05 00:15:47.845299 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-05 00:15:48.386980 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:48.387080 | orchestrator | 2026-01-05 00:15:48.387095 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-05 00:15:48.814589 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:48.814681 | orchestrator | 2026-01-05 00:15:48.814696 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-05 00:15:50.151828 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-05 00:15:50.151906 | orchestrator | changed: [testbed-manager] => (item=openstack) 
2026-01-05 00:15:50.151911 | orchestrator | 2026-01-05 00:15:50.151917 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-05 00:15:50.836817 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:50.836921 | orchestrator | 2026-01-05 00:15:50.836938 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-05 00:15:51.236628 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:51.236743 | orchestrator | 2026-01-05 00:15:51.236771 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-05 00:15:51.633546 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:51.633648 | orchestrator | 2026-01-05 00:15:51.633665 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-05 00:15:51.675125 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:51.675234 | orchestrator | 2026-01-05 00:15:51.675251 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-05 00:15:51.754705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-05 00:15:51.754842 | orchestrator | 2026-01-05 00:15:51.754872 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-05 00:15:51.812501 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:51.812609 | orchestrator | 2026-01-05 00:15:51.812626 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-05 00:15:53.956574 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-05 00:15:53.956658 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-05 00:15:53.956675 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
2026-01-05 00:15:53.956687 | orchestrator | 2026-01-05 00:15:53.956704 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-05 00:15:54.719186 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:54.720494 | orchestrator | 2026-01-05 00:15:54.720531 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-05 00:15:55.465192 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:55.465311 | orchestrator | 2026-01-05 00:15:55.465328 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-05 00:15:56.181432 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:56.181542 | orchestrator | 2026-01-05 00:15:56.181559 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-05 00:15:56.258258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-05 00:15:56.258317 | orchestrator | 2026-01-05 00:15:56.258332 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-05 00:15:56.320176 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:56.320221 | orchestrator | 2026-01-05 00:15:56.320231 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-05 00:15:57.042907 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-05 00:15:57.043037 | orchestrator | 2026-01-05 00:15:57.043052 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-05 00:15:57.137872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-05 00:15:57.138102 | orchestrator | 2026-01-05 00:15:57.138138 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-05 00:15:57.893705 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:57.893823 | orchestrator | 2026-01-05 00:15:57.893841 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-05 00:15:58.513149 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:58.513261 | orchestrator | 2026-01-05 00:15:58.513277 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-05 00:15:58.575075 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:58.575185 | orchestrator | 2026-01-05 00:15:58.575201 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-05 00:15:58.634011 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:58.634167 | orchestrator | 2026-01-05 00:15:58.634186 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-05 00:15:59.537431 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:59.537545 | orchestrator | 2026-01-05 00:15:59.537562 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-05 00:17:14.738299 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:14.738405 | orchestrator | 2026-01-05 00:17:14.738422 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-05 00:17:15.779421 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:15.779565 | orchestrator | 2026-01-05 00:17:15.779604 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-05 00:17:15.839327 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:15.839420 | orchestrator | 2026-01-05 00:17:15.839434 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
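The manager role above picks a mariadb healthcheck depending on whether the deployed version is below or at least 11.0.0 (here 11.8.4, so the `>= 11.0.0` branch runs). A common shell technique for such comparisons, when no semver helper is available, is `sort -V`; this is an illustrative sketch, not the role's actual implementation:

```shell
#!/usr/bin/env bash
# Compare dotted version strings with GNU sort -V.
# version_ge A B succeeds (exit 0) when A >= B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

if version_ge "11.8.4" "11.0.0"; then
    echo "use mariadb >= 11.0.0 healthcheck"
else
    echo "use mariadb < 11.0.0 healthcheck"
fi
```

`sort -V` orders numeric components naturally (so 10.11 sorts below 11.0), which plain lexical comparison would get wrong.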
2026-01-05 00:17:18.879739 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:18.879860 | orchestrator | 2026-01-05 00:17:18.879876 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-05 00:17:18.937983 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:18.938114 | orchestrator | 2026-01-05 00:17:18.938130 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-05 00:17:18.938187 | orchestrator | 2026-01-05 00:17:18.938202 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-05 00:17:18.998959 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:18.999039 | orchestrator | 2026-01-05 00:17:18.999054 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-05 00:18:19.059820 | orchestrator | Pausing for 60 seconds 2026-01-05 00:18:19.059945 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:19.059961 | orchestrator | 2026-01-05 00:18:19.059976 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-05 00:18:22.191040 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:22.191159 | orchestrator | 2026-01-05 00:18:22.191174 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-05 00:19:24.298723 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-05 00:19:24.298838 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-05 00:19:24.298852 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-01-05 00:19:24.298864 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:24.298878 | orchestrator | 2026-01-05 00:19:24.298890 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-05 00:19:35.877573 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:35.877708 | orchestrator | 2026-01-05 00:19:35.877725 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-05 00:19:35.951113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-05 00:19:35.951212 | orchestrator | 2026-01-05 00:19:35.951226 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-05 00:19:35.951238 | orchestrator | 2026-01-05 00:19:35.951331 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-05 00:19:36.023386 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:36.023502 | orchestrator | 2026-01-05 00:19:36.023518 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-05 00:19:36.122899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-05 00:19:36.123014 | orchestrator | 2026-01-05 00:19:36.123032 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-05 00:19:36.931318 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:36.931425 | orchestrator | 2026-01-05 00:19:36.931445 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-05 00:19:40.263915 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:40.264126 | orchestrator | 2026-01-05 00:19:40.264143 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-01-05 00:19:40.332586 | orchestrator | ok: [testbed-manager] => { 2026-01-05 00:19:40.332696 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-05 00:19:40.332712 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-05 00:19:40.332723 | orchestrator | "Checking running containers against expected versions...", 2026-01-05 00:19:40.332735 | orchestrator | "", 2026-01-05 00:19:40.332747 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-05 00:19:40.332758 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-05 00:19:40.332769 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.332781 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-05 00:19:40.332792 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.332803 | orchestrator | "", 2026-01-05 00:19:40.332814 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-05 00:19:40.332825 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-05 00:19:40.332836 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.332847 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-01-05 00:19:40.332858 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.332869 | orchestrator | "", 2026-01-05 00:19:40.332880 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-05 00:19:40.332891 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-05 00:19:40.332902 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.332913 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-05 00:19:40.332924 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.332935 | orchestrator | "", 2026-01-05 00:19:40.332946 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-05 00:19:40.332983 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-05 00:19:40.332994 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333005 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-05 00:19:40.333016 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333027 | orchestrator | "", 2026-01-05 00:19:40.333038 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-05 00:19:40.333048 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-01-05 00:19:40.333059 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333070 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-01-05 00:19:40.333081 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333093 | orchestrator | "", 2026-01-05 00:19:40.333106 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-05 00:19:40.333120 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333132 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333145 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333159 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333171 | orchestrator | "", 2026-01-05 00:19:40.333183 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-05 00:19:40.333196 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-05 00:19:40.333209 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333230 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-05 00:19:40.333244 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333279 | orchestrator | "", 2026-01-05 00:19:40.333292 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-05 00:19:40.333305 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-05 00:19:40.333324 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333337 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-05 00:19:40.333350 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333363 | orchestrator | "", 2026-01-05 00:19:40.333376 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-05 00:19:40.333388 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-05 00:19:40.333402 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333414 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-05 00:19:40.333428 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333441 | orchestrator | "", 2026-01-05 00:19:40.333452 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-05 00:19:40.333463 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-05 00:19:40.333474 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333484 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-05 00:19:40.333495 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333506 | orchestrator | "", 2026-01-05 00:19:40.333517 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-05 00:19:40.333527 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333538 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333549 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333560 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333571 | orchestrator | "", 2026-01-05 00:19:40.333582 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-05 00:19:40.333592 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333603 | 
orchestrator | " Enabled: true", 2026-01-05 00:19:40.333614 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333625 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333635 | orchestrator | "", 2026-01-05 00:19:40.333646 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-05 00:19:40.333657 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333676 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333687 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333697 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333708 | orchestrator | "", 2026-01-05 00:19:40.333719 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-05 00:19:40.333730 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333740 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333751 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333762 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333773 | orchestrator | "", 2026-01-05 00:19:40.333783 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-05 00:19:40.333811 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333823 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.333834 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-05 00:19:40.333845 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.333856 | orchestrator | "", 2026-01-05 00:19:40.333867 | orchestrator | "=== Summary ===", 2026-01-05 00:19:40.333877 | orchestrator | "Errors (version mismatches): 0", 2026-01-05 00:19:40.333888 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-05 00:19:40.333899 | orchestrator | "", 2026-01-05 00:19:40.333910 | orchestrator | "✅ All running containers match expected 
versions!" 2026-01-05 00:19:40.333921 | orchestrator | ] 2026-01-05 00:19:40.333932 | orchestrator | } 2026-01-05 00:19:40.333944 | orchestrator | 2026-01-05 00:19:40.333955 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-05 00:19:40.371867 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:40.371960 | orchestrator | 2026-01-05 00:19:40.371974 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:19:40.371987 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-05 00:19:40.371999 | orchestrator | 2026-01-05 00:19:40.496906 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 00:19:40.496961 | orchestrator | + deactivate 2026-01-05 00:19:40.496975 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-05 00:19:40.496988 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:19:40.496998 | orchestrator | + export PATH 2026-01-05 00:19:40.497009 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-05 00:19:40.497022 | orchestrator | + '[' -n '' ']' 2026-01-05 00:19:40.497033 | orchestrator | + hash -r 2026-01-05 00:19:40.497043 | orchestrator | + '[' -n '' ']' 2026-01-05 00:19:40.497054 | orchestrator | + unset VIRTUAL_ENV 2026-01-05 00:19:40.497065 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-05 00:19:40.497076 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-05 00:19:40.497087 | orchestrator | + unset -f deactivate 2026-01-05 00:19:40.497098 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-05 00:19:40.504770 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-05 00:19:40.504804 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-05 00:19:40.504816 | orchestrator | + local max_attempts=60 2026-01-05 00:19:40.504827 | orchestrator | + local name=ceph-ansible 2026-01-05 00:19:40.504838 | orchestrator | + local attempt_num=1 2026-01-05 00:19:40.505474 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:19:40.536022 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:19:40.536068 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-05 00:19:40.536081 | orchestrator | + local max_attempts=60 2026-01-05 00:19:40.536093 | orchestrator | + local name=kolla-ansible 2026-01-05 00:19:40.536104 | orchestrator | + local attempt_num=1 2026-01-05 00:19:40.536516 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-05 00:19:40.565007 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:19:40.565048 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-05 00:19:40.565061 | orchestrator | + local max_attempts=60 2026-01-05 00:19:40.565073 | orchestrator | + local name=osism-ansible 2026-01-05 00:19:40.565084 | orchestrator | + local attempt_num=1 2026-01-05 00:19:40.565480 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-05 00:19:40.601879 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:19:40.601929 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-05 00:19:40.601943 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-05 00:19:41.312234 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-05 00:19:41.476937 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-05 00:19:41.477033 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.477043 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.477065 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-05 00:19:41.477074 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-05 00:19:41.477083 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.477120 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.477127 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.477134 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.477140 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-05 00:19:41.477147 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.477154 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-05 00:19:41.477161 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.477168 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-05 00:19:41.477175 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.477181 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.482613 | orchestrator | ++ semver latest 7.0.0 2026-01-05 00:19:41.543778 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:19:41.543868 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 00:19:41.543919 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-05 00:19:41.549625 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-05 00:19:53.907066 | orchestrator | 2026-01-05 00:19:53 | INFO  | Task 515d3e48-2921-4d3d-ba54-3e4ebce5c8e5 (resolvconf) was prepared for execution. 2026-01-05 00:19:53.907159 | orchestrator | 2026-01-05 00:19:53 | INFO  | It takes a moment until task 515d3e48-2921-4d3d-ba54-3e4ebce5c8e5 (resolvconf) has been started and output is visible here. 
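The `set -x` trace earlier (the `wait_for_container_healthy 60 ceph-ansible` calls) shows a helper that polls `docker inspect -f '{{.State.Health.Status}}'` until a container reports healthy. A minimal sketch of such a helper, reconstructed from the trace; the polling interval, retry loop, and error message are assumptions, since the log only shows the argument names, the inspect call, and the healthy comparison:

```shell
# Sketch of the health-wait helper seen in the trace above.
# Assumption: a 5-second poll interval and a retry cap; the actual script
# body is not visible in the log, only its variable names and inspect call.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    while true; do
        # Query the container's health state; empty if the container is missing.
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the job above this is called once per manager container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`), each of which is already healthy on the first check, so the loop exits immediately.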
2026-01-05 00:20:08.478320 | orchestrator | 2026-01-05 00:20:08.478413 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-05 00:20:08.478428 | orchestrator | 2026-01-05 00:20:08.478439 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:20:08.478450 | orchestrator | Monday 05 January 2026 00:19:58 +0000 (0:00:00.132) 0:00:00.132 ******** 2026-01-05 00:20:08.478462 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:08.478473 | orchestrator | 2026-01-05 00:20:08.478484 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-05 00:20:08.478496 | orchestrator | Monday 05 January 2026 00:20:02 +0000 (0:00:04.464) 0:00:04.597 ******** 2026-01-05 00:20:08.478507 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:08.478518 | orchestrator | 2026-01-05 00:20:08.478529 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-05 00:20:08.478540 | orchestrator | Monday 05 January 2026 00:20:02 +0000 (0:00:00.059) 0:00:04.657 ******** 2026-01-05 00:20:08.478551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-05 00:20:08.478562 | orchestrator | 2026-01-05 00:20:08.478582 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-05 00:20:08.478593 | orchestrator | Monday 05 January 2026 00:20:02 +0000 (0:00:00.087) 0:00:04.745 ******** 2026-01-05 00:20:08.478604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:20:08.478615 | orchestrator | 2026-01-05 00:20:08.478627 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-05 00:20:08.478638 | orchestrator | Monday 05 January 2026 00:20:02 +0000 (0:00:00.059) 0:00:04.804 ******** 2026-01-05 00:20:08.478648 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:08.478659 | orchestrator | 2026-01-05 00:20:08.478670 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-05 00:20:08.478681 | orchestrator | Monday 05 January 2026 00:20:03 +0000 (0:00:00.995) 0:00:05.800 ******** 2026-01-05 00:20:08.478692 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:08.478702 | orchestrator | 2026-01-05 00:20:08.478713 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-05 00:20:08.478724 | orchestrator | Monday 05 January 2026 00:20:03 +0000 (0:00:00.065) 0:00:05.865 ******** 2026-01-05 00:20:08.478735 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:08.478745 | orchestrator | 2026-01-05 00:20:08.478756 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-05 00:20:08.478767 | orchestrator | Monday 05 January 2026 00:20:04 +0000 (0:00:00.464) 0:00:06.330 ******** 2026-01-05 00:20:08.478778 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:08.478788 | orchestrator | 2026-01-05 00:20:08.478799 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-05 00:20:08.478811 | orchestrator | Monday 05 January 2026 00:20:04 +0000 (0:00:00.063) 0:00:06.393 ******** 2026-01-05 00:20:08.478821 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:08.478832 | orchestrator | 2026-01-05 00:20:08.478843 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-05 00:20:08.478856 | orchestrator | Monday 05 January 2026 00:20:04 +0000 (0:00:00.503) 0:00:06.897 ******** 2026-01-05 00:20:08.478870 | orchestrator | changed: 
[testbed-manager] 2026-01-05 00:20:08.478903 | orchestrator | 2026-01-05 00:20:08.478917 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-05 00:20:08.478929 | orchestrator | Monday 05 January 2026 00:20:05 +0000 (0:00:01.074) 0:00:07.972 ******** 2026-01-05 00:20:08.478941 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:08.478954 | orchestrator | 2026-01-05 00:20:08.478968 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-05 00:20:08.478981 | orchestrator | Monday 05 January 2026 00:20:06 +0000 (0:00:00.992) 0:00:08.965 ******** 2026-01-05 00:20:08.478993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-05 00:20:08.479006 | orchestrator | 2026-01-05 00:20:08.479020 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-05 00:20:08.479033 | orchestrator | Monday 05 January 2026 00:20:07 +0000 (0:00:00.080) 0:00:09.045 ******** 2026-01-05 00:20:08.479046 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:08.479058 | orchestrator | 2026-01-05 00:20:08.479072 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:20:08.479085 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:20:08.479098 | orchestrator | 2026-01-05 00:20:08.479110 | orchestrator | 2026-01-05 00:20:08.479124 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:20:08.479136 | orchestrator | Monday 05 January 2026 00:20:08 +0000 (0:00:01.156) 0:00:10.202 ******** 2026-01-05 00:20:08.479149 | orchestrator | =============================================================================== 2026-01-05 00:20:08.479162 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.46s 2026-01-05 00:20:08.479176 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2026-01-05 00:20:08.479189 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s 2026-01-05 00:20:08.479202 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.00s 2026-01-05 00:20:08.479213 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2026-01-05 00:20:08.479224 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2026-01-05 00:20:08.479250 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2026-01-05 00:20:08.479261 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-01-05 00:20:08.479272 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-01-05 00:20:08.479283 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-01-05 00:20:08.479373 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s 2026-01-05 00:20:08.479395 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-01-05 00:20:08.479406 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-01-05 00:20:08.806237 | orchestrator | + osism apply sshconfig 2026-01-05 00:20:20.984178 | orchestrator | 2026-01-05 00:20:20 | INFO  | Task c32feb9a-ed85-49bd-ab6d-20b619fd9010 (sshconfig) was prepared for execution. 
2026-01-05 00:20:20.984293 | orchestrator | 2026-01-05 00:20:20 | INFO  | It takes a moment until task c32feb9a-ed85-49bd-ab6d-20b619fd9010 (sshconfig) has been started and output is visible here. 2026-01-05 00:20:33.779823 | orchestrator | 2026-01-05 00:20:33.779920 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-05 00:20:33.779929 | orchestrator | 2026-01-05 00:20:33.779935 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-05 00:20:33.779941 | orchestrator | Monday 05 January 2026 00:20:25 +0000 (0:00:00.166) 0:00:00.166 ******** 2026-01-05 00:20:33.779972 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:33.779979 | orchestrator | 2026-01-05 00:20:33.779984 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-05 00:20:33.779989 | orchestrator | Monday 05 January 2026 00:20:25 +0000 (0:00:00.553) 0:00:00.719 ******** 2026-01-05 00:20:33.779994 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:33.780000 | orchestrator | 2026-01-05 00:20:33.780005 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-05 00:20:33.780010 | orchestrator | Monday 05 January 2026 00:20:26 +0000 (0:00:00.543) 0:00:01.262 ******** 2026-01-05 00:20:33.780015 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:20:33.780021 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:20:33.780026 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:20:33.780031 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:20:33.780036 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:20:33.780041 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:20:33.780045 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-05 00:20:33.780050 | orchestrator | 2026-01-05 00:20:33.780055 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-05 00:20:33.780060 | orchestrator | Monday 05 January 2026 00:20:32 +0000 (0:00:06.340) 0:00:07.603 ******** 2026-01-05 00:20:33.780065 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:33.780070 | orchestrator | 2026-01-05 00:20:33.780075 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-05 00:20:33.780080 | orchestrator | Monday 05 January 2026 00:20:32 +0000 (0:00:00.068) 0:00:07.672 ******** 2026-01-05 00:20:33.780085 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:33.780090 | orchestrator | 2026-01-05 00:20:33.780095 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:20:33.780102 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:20:33.780108 | orchestrator | 2026-01-05 00:20:33.780112 | orchestrator | 2026-01-05 00:20:33.780118 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:20:33.780122 | orchestrator | Monday 05 January 2026 00:20:33 +0000 (0:00:00.600) 0:00:08.273 ******** 2026-01-05 00:20:33.780127 | orchestrator | =============================================================================== 2026-01-05 00:20:33.780132 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.34s 2026-01-05 00:20:33.780137 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2026-01-05 00:20:33.780142 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2026-01-05 00:20:33.780147 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.54s 2026-01-05 00:20:33.780152 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-01-05 00:20:34.095792 | orchestrator | + osism apply known-hosts 2026-01-05 00:20:46.427580 | orchestrator | 2026-01-05 00:20:46 | INFO  | Task 747797dd-444d-425b-9ce6-e85d3a366a69 (known-hosts) was prepared for execution. 2026-01-05 00:20:46.427700 | orchestrator | 2026-01-05 00:20:46 | INFO  | It takes a moment until task 747797dd-444d-425b-9ce6-e85d3a366a69 (known-hosts) has been started and output is visible here. 2026-01-05 00:21:04.485218 | orchestrator | 2026-01-05 00:21:04.485337 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-05 00:21:04.485354 | orchestrator | 2026-01-05 00:21:04.485430 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-05 00:21:04.485443 | orchestrator | Monday 05 January 2026 00:20:50 +0000 (0:00:00.167) 0:00:00.167 ******** 2026-01-05 00:21:04.485455 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:21:04.485499 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:21:04.485511 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:21:04.485522 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-05 00:21:04.485532 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:21:04.485544 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:21:04.485554 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:21:04.485565 | orchestrator | 2026-01-05 00:21:04.485588 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-05 00:21:04.485602 | orchestrator | Monday 05 January 2026 00:20:57 +0000 (0:00:06.344) 0:00:06.512 ******** 2026-01-05 
00:21:04.485615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-05 00:21:04.485628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-05 00:21:04.485639 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-05 00:21:04.485650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-05 00:21:04.485661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-05 00:21:04.485672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-05 00:21:04.485682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-05 00:21:04.485693 | orchestrator | 2026-01-05 00:21:04.485705 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:04.485718 | orchestrator | Monday 05 January 2026 00:20:57 +0000 (0:00:00.214) 0:00:06.727 ******** 2026-01-05 00:21:04.485735 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC8CEPakx6nQUBEXN/S++QApEVnbH8mniLVp65B1pE0aQVrWONiG2JlBCbEXyBNVE9x3wzbSWRDkXV31caTjJCC+unWS5bMozm9P8/ljKgXhBDabB0FsVv4l4QwT4IHtsl8yYvMPVc7tqVKRHR58a/RZUFKfMzHpNMsCfABlZUqJR40KpoH3rEWJZtmQJckWlIO/ZcO/M2ftFzlTI9hZHfUEkHo9QoXFnpow666kXZaAlL34+VZLFopDwbaqoTc/MSwSivwIKLJ0viqXDPrz0OBIuF25GMO0tohWKeFjCrGmF6hQzh7YWtX0BCI0ZulhrGv60y5AjKE1R4pfdnK8HcpgrNO6OGeKiV2VwN0WxpTrSEd7UVw03igLacRtb0r2fO0CUHp2TsCSWRo3AluhXBMO7a7LaUW6sZ5vQ253OK2tnBpH4aHvAn0zuJbfwEwp27PyC2o1774uKz3toO+2QmUfMKpGDHXlXh8b1TBhoFAWLa+8pmBnBHIr0RiEXHIGrE=) 2026-01-05 00:21:04.485753 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL7ROWxVKZXl2Vtg2b4KMFLSRBDBdvpJIOS9lHmXaNj1bH48ybeShfqyIWjnLV7MQpdP0jV4gpdjDhqXcszBLRE=) 2026-01-05 00:21:04.485768 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKlivoPIgZ/VDkj8a1tYoVNVElHU8p8np/eA9E6sWmnB) 2026-01-05 00:21:04.485783 | orchestrator | 2026-01-05 00:21:04.485796 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:04.485809 | orchestrator | Monday 05 January 2026 00:20:58 +0000 (0:00:01.312) 0:00:08.039 ******** 2026-01-05 00:21:04.485847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUAO8SLfB1rXjcwRxqdg2hvQzwLwHn7Wnn2yh27zEAd3at3J6e8gucXFyfdXdADE73+fPRCSQhIqlYkSIKBpCU7bx4yyjSobCZkOO0Awx0Cwv5/YmSiMUHwGl0LN2ICgqyxzxuOgctxD7WgMTfqr7JP9M+6pzn08QlsDHOdyavqWN/3HVwbCA97d2lmY+ir26qXkewyc/w24ckqdC+LNikYxlTj1UebSJsplcVfHiTVH18h5x7olsu2OrUAYYkhxmJA14pMgCSuQhcknNfR4YsQXWMBS6scAhj0Le4dUtfL0TprzsDNf7LXzd9VJuwYg+SIL/C60yjwOoT8Eze200PDCfC/ZNqh/jANYh5DvQWrm8usWblwJ+vWFUeGu3dwj+Rbh6KZVPcMSARbC1lHoQh6QEOCKTC1nhGVET3Cz9+Ha25nXoITtAhHwhhRhXkPFUEbXQ8NpVgS5ltPz8NRVrKzYYks/5tfz2XEMLHKbq5tfvGOdqPGAiKGKlO7YsF5VM=) 2026-01-05 00:21:04.485872 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGUFMHul9GvUKjVebUoaKjX4B5ZUGJ96QnLcdVIWy6ALg+yuI7UYiNWE6ghcPwW3Ar+m3uRKqaDWpni1IBed7CM=) 2026-01-05 00:21:04.485884 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgrmKF1f3rrYB4KAlA8jCl8r08EhDmRYUuLjJNO9bbx) 2026-01-05 00:21:04.485894 | orchestrator | 2026-01-05 00:21:04.485906 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:04.485916 | orchestrator | Monday 05 January 2026 00:20:59 +0000 (0:00:01.216) 0:00:09.256 ******** 2026-01-05 00:21:04.485997 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTNvIE7TZrUHXICbORA/Q0mLby1+K0fPdQ6b+GuRyXzy9x6LoiWHkSA7taCMPMbdQISzKxt8LTIGyZs99ZnNIS+Y5Sn6r+FP8ugcHpSOFkSRI2KXrX0pX/LSczGOzyD0N9XxInTB5R2XRu4t7M0SUlFq7Zye6UAQjfHli1nc/SVzJpvtTcdwgrTiPT39WudC4HwNejICbsfocdxPQKhKx1QwWwwXBwvPUijuC18/Iws408JEA/WG4atMwdLhDA1YC9tgpcIuXW4NMced64bP69tUfK4OhLBt8WN/z87nGnN1o7Y2GnWLNT1I7pmGp76JT4KGXCO1bS1Y5dJ5jaBX4U+EQsGS9nSDTrC8onjaJUMXL4RZKt3v4SUWVjBR/GMpj2SrbV7+Pd281hQF6shtCkeyYW9i5Z4RwuoCgOMNjRl7+T+huyfbTv2bvA3mMvcT3Tv0hpxqSUVWh+QNpA9zUont2RtnGhiQt0f3eNFALrQTC3rsxW+g02NWDthRGylj8=) 2026-01-05 00:21:04.486077 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKm8UQmeNwSDr2DGEOD2qFq5WfTULNzhDDihHc8o8bvW) 2026-01-05 00:21:04.486091 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMwO0FaUe6UeSSgvYiS7OarrQNi2fwZW+9YuNqftPxi8qFrWEv1kGGoSYp3YRl5/2iGSX3i+w7Ud09b+g1Sh/xk=) 2026-01-05 00:21:04.486102 | orchestrator | 2026-01-05 00:21:04.486113 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:04.486124 | orchestrator | Monday 05 January 2026 00:21:00 +0000 (0:00:01.187) 
0:00:10.443 ******** 2026-01-05 00:21:04.486136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCywMvl1VYWEXmdh3n6kGZXaFU8H/amf6zYCUv7xCLBApP8u1c1vUjZthe2/NURa4ovSHMmVqAbIEK+zeips1HtXzugCqUPgmAsowJp3EAnTk49RrCZuGm3bZFism1Y1Q9j99ZUc6MunNq2nsLGc2pDIXb78QlrlA74VDpRiXwd/RtTER6UPX5CCPOuYJmSdcim3UNj0zi+cHi/tlFDuyetwtUxjqUhA/gkez9xC7jJXPfrqtnjyZtz1ROsYh2Pc6puB8qXrj5bjxEiVSSBpCJF75axDqi59rveCvh4Agpoh3U6T1+TpfrlEiTaAm/UvWnvwgVp+ynM/5ao0zbnw6Ks9G0O3rZmjhjvvI/tBgPdb/efieiES41GA/QfWz+hhlL69U6JvGjWFlDmW2dNqVGQXhMyaJpWkfeRhBVssDAAIWGjU7IS01ceJT9hKkXOpwiFAFHP70e4Jgg9H/EU10cKq+NeMvdVcGk9Yz3C4LCXEGf9Xrv4SE0R28w0FkEwM0k=) 2026-01-05 00:21:04.486147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBkqQaoBsMoAoagxLMgKP7G0vGTEO5hmxdKv7nP7Fnh+t3xoLX8sktmRQFiCPiw9H1kuY5GSMuIEqTbf9BXtvq8=) 2026-01-05 00:21:04.486158 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPofk4Osj/tkyPrsGjCwAmrlmHyyA79h6g3qhn0Dh9wh) 2026-01-05 00:21:04.486169 | orchestrator | 2026-01-05 00:21:04.486181 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:04.486191 | orchestrator | Monday 05 January 2026 00:21:02 +0000 (0:00:01.169) 0:00:11.613 ******** 2026-01-05 00:21:04.486202 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPFCaCOQs6Ol0tTChVElEpwz4AaH4NNTmvoOu7+Ab4Pf) 2026-01-05 00:21:04.486213 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCx68N1YolcDBzrkcnHoEqsbXp74+kwKmZZb4nsWX3po6GguuCw1ZYuC5EoncrnXzLijb0ldpIPDsnEfSkLl4SGhhLZV6ct73TGl/XyMNN0JqymVHxNxpoXIERCFxlkFOrbMrGEC32JtdbPtnpxbz+HA2BKvvjnhNpKYH7RQPQ6GJtKsAIrsfXnKwPA/5En7qkdfgr8XHdTUI8BEaLSixxLhD67tFd5oaT0zWY4vke7Mk8ZFQIWmn3j/1N+fRV0pG/PzSth8j2QU9arGSCSvGORRJ0dmLAbKOyAZ7u2i4wUqT7eJ6A/YYgRXRXRsVSShHo7ixLNMRlaw4gAwEziVuCtQq40zbb+8/iHaiBaMGnGaHVm9aKgO04+n6WwCWIjHFHNOCUPF2fs+rwRn+iKBKH1/+cTLu/V1B8xl+aFKhS1/wVi7i6RM/2zaMnrlCuqHDKOlYA4tXVj5OMZivhBKDGo/8DrbZ2DCzQmQqMQtsO33F/ioDF5kR/ndB1BpZzT9mk=) 2026-01-05 00:21:04.486232 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCPbMLJONfNyX12cLIlIscq65FxCcWNlSg1vl1Y1KepQD7I0K+Fivuh37qYTXfmG22Zm3/oUnPOryYMWUgoomo=) 2026-01-05 00:21:04.486243 | orchestrator | 2026-01-05 00:21:04.486254 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:04.486265 | orchestrator | Monday 05 January 2026 00:21:03 +0000 (0:00:01.207) 0:00:12.821 ******** 2026-01-05 00:21:04.486286 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH6E+vukt5ci1CrLlrd6MXLW8JmnNj0UDEuDX4da/t6BBUr+nOG/6WYXxG2Ya5UvI3gZg1L1CIzmIyVTZLrKjmTkWDXoWSqfPvCT0xZs5h07+zDQ81nYeMoys1fp7y+UMTpAvBQxKI2oqOsihh98BcAYp890hwzAaUq9tQQgAe6JBudbDUsTI+uYr6/HJlijdeA/XLyXuV6Hj+Fphb/kZlozb+EnbM3/7+2087HVkNMAhAgCMSS4czLi5a3o1t8fVtlaEWp1UXjmTAOToN3byn4DxwlV0l4++4p89qiGtbUw2B8HWchaHLhdz71Z+H8dQTQV1LnvhS/kxkOdbiQRRIpGumoy8+BU4gaYL9YJ+HJhSKdKeT4YLx42ODnlSTtVMYpGQVwPgcptLaPV+SG9IoqE8V+jsgPekuZ/hKwjWuZJT4FnN1mevoqv4WNI9QnpnHdTu2v4KFN73Xw5FyXROVkAKtRp3uBkF+TIfOSvO6VQxJAjodGZBtGeOUZkv8JVM=) 2026-01-05 00:21:15.638577 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIo+4WV1VOnONXzRZWQ9so9F9FxqWfA8MuNbv00kmtCzvMHKM1LZV662Itpp/Azi7OU+Xgtc/EB1kuA/yH9GzlE=) 
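The entries being written above follow the standard OpenSSH `known_hosts` format: hostname (or, later in the play, the `ansible_host` IP), key type, then the base64-encoded public key, one entry per line. A small illustration of that format; the helper name is hypothetical, since the role works directly with `ssh-keyscan` output rather than assembling lines itself:

```shell
# Hypothetical helper: assemble one known_hosts line from its three fields,
# mirroring the "host key-type key" entries written by the role above.
format_known_hosts_entry() {
    printf '%s %s %s\n' "$1" "$2" "$3"
}

# The role's scan step is equivalent to something like:
#   ssh-keyscan -t rsa,ecdsa,ed25519 testbed-node-0
# which already emits lines in exactly this format.
```

This is why each host contributes three entries in the log: one per scanned key type (ssh-rsa, ecdsa-sha2-nistp256, ssh-ed25519).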
2026-01-05 00:21:15.638699 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDPkyXbfMGQU9uczlgX4HRGsQAwCZnBd+JAXf7EyuRWU) 2026-01-05 00:21:15.638716 | orchestrator | 2026-01-05 00:21:15.638729 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:15.638741 | orchestrator | Monday 05 January 2026 00:21:04 +0000 (0:00:01.118) 0:00:13.939 ******** 2026-01-05 00:21:15.638755 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCE4XcNlR+zy/ZlhB46AR/VlvfxFoZB6Zqfas4+MJmBQHpoEjCkZtwb5WElAQTxmVeiOEeDIROGAcRRGaoMLnrzHQMR/mklt10f+bQYJiT1WlPM2HzjRgGHa7wuP/th3QcNJtVJEoXyozpvawPnYrCvaL8st9T6tUj37cAHvs9rX3stkvpf5k8/VfvHywJpMSouks2xlDbosN2A6gdvhyI/SuD/DW/C2djVzQUxYQWFLfDCsM9poP7wMfDqZSCClZqMqla+m2MaXY0z5vyX6Vqpmbz+dg1mc7cUGlzUK/h9B/cSq6qPqoGjyuOb/X8SpMvCUDE6CSUlJ7frmNpxQJfHU4leeGULYYKPUL2TXENk04leoU5fvQSSHxzYkT+sOmzbC9sDOVw5yrKnMlGHmpVxg3fWI6dJi9XhlB7TTw1ITxqVLIxtxlsyzl5esH1LkkpSE1RjwdbbmQ53AGq/QFPlv4awQzAKl6TVQbo+B4o8/vd32dMuO/+JKSfjJ9JtE8=) 2026-01-05 00:21:15.638770 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPK96t4KvUFKukEi6lFQ8GdPrJctqm/2VvDAgik6IhdjrSkm4ft8wPtZCsNxUv5DzPcC1whc6Sf1xwT+oimEgXc=) 2026-01-05 00:21:15.638781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGBgbOiOXg6owBBIS4by6b9UaP/hLAdKUtFxVzjBJVga) 2026-01-05 00:21:15.638792 | orchestrator | 2026-01-05 00:21:15.638804 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-05 00:21:15.638837 | orchestrator | Monday 05 January 2026 00:21:05 +0000 (0:00:01.139) 0:00:15.079 ******** 2026-01-05 00:21:15.638850 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:21:15.638861 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2026-01-05 00:21:15.638872 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:21:15.638883 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-05 00:21:15.638893 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:21:15.638930 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:21:15.638942 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:21:15.638952 | orchestrator | 2026-01-05 00:21:15.638963 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-05 00:21:15.638975 | orchestrator | Monday 05 January 2026 00:21:11 +0000 (0:00:05.622) 0:00:20.702 ******** 2026-01-05 00:21:15.638988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-05 00:21:15.639001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-05 00:21:15.639012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-05 00:21:15.639022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-05 00:21:15.639033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-05 00:21:15.639044 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-05 00:21:15.639055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-05 00:21:15.639065 | orchestrator | 2026-01-05 00:21:15.639076 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:15.639089 | orchestrator | Monday 05 January 2026 00:21:11 +0000 (0:00:00.195) 0:00:20.897 ******** 2026-01-05 00:21:15.639150 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8CEPakx6nQUBEXN/S++QApEVnbH8mniLVp65B1pE0aQVrWONiG2JlBCbEXyBNVE9x3wzbSWRDkXV31caTjJCC+unWS5bMozm9P8/ljKgXhBDabB0FsVv4l4QwT4IHtsl8yYvMPVc7tqVKRHR58a/RZUFKfMzHpNMsCfABlZUqJR40KpoH3rEWJZtmQJckWlIO/ZcO/M2ftFzlTI9hZHfUEkHo9QoXFnpow666kXZaAlL34+VZLFopDwbaqoTc/MSwSivwIKLJ0viqXDPrz0OBIuF25GMO0tohWKeFjCrGmF6hQzh7YWtX0BCI0ZulhrGv60y5AjKE1R4pfdnK8HcpgrNO6OGeKiV2VwN0WxpTrSEd7UVw03igLacRtb0r2fO0CUHp2TsCSWRo3AluhXBMO7a7LaUW6sZ5vQ253OK2tnBpH4aHvAn0zuJbfwEwp27PyC2o1774uKz3toO+2QmUfMKpGDHXlXh8b1TBhoFAWLa+8pmBnBHIr0RiEXHIGrE=) 2026-01-05 00:21:15.639177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKlivoPIgZ/VDkj8a1tYoVNVElHU8p8np/eA9E6sWmnB) 2026-01-05 00:21:15.639190 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL7ROWxVKZXl2Vtg2b4KMFLSRBDBdvpJIOS9lHmXaNj1bH48ybeShfqyIWjnLV7MQpdP0jV4gpdjDhqXcszBLRE=) 2026-01-05 00:21:15.639203 | orchestrator | 2026-01-05 00:21:15.639216 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:15.639229 | orchestrator | Monday 05 January 2026 
00:21:12 +0000 (0:00:01.118) 0:00:22.016 ******** 2026-01-05 00:21:15.639242 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgrmKF1f3rrYB4KAlA8jCl8r08EhDmRYUuLjJNO9bbx) 2026-01-05 00:21:15.639255 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUAO8SLfB1rXjcwRxqdg2hvQzwLwHn7Wnn2yh27zEAd3at3J6e8gucXFyfdXdADE73+fPRCSQhIqlYkSIKBpCU7bx4yyjSobCZkOO0Awx0Cwv5/YmSiMUHwGl0LN2ICgqyxzxuOgctxD7WgMTfqr7JP9M+6pzn08QlsDHOdyavqWN/3HVwbCA97d2lmY+ir26qXkewyc/w24ckqdC+LNikYxlTj1UebSJsplcVfHiTVH18h5x7olsu2OrUAYYkhxmJA14pMgCSuQhcknNfR4YsQXWMBS6scAhj0Le4dUtfL0TprzsDNf7LXzd9VJuwYg+SIL/C60yjwOoT8Eze200PDCfC/ZNqh/jANYh5DvQWrm8usWblwJ+vWFUeGu3dwj+Rbh6KZVPcMSARbC1lHoQh6QEOCKTC1nhGVET3Cz9+Ha25nXoITtAhHwhhRhXkPFUEbXQ8NpVgS5ltPz8NRVrKzYYks/5tfz2XEMLHKbq5tfvGOdqPGAiKGKlO7YsF5VM=) 2026-01-05 00:21:15.639280 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGUFMHul9GvUKjVebUoaKjX4B5ZUGJ96QnLcdVIWy6ALg+yuI7UYiNWE6ghcPwW3Ar+m3uRKqaDWpni1IBed7CM=) 2026-01-05 00:21:15.639293 | orchestrator | 2026-01-05 00:21:15.639306 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:15.639319 | orchestrator | Monday 05 January 2026 00:21:13 +0000 (0:00:01.098) 0:00:23.115 ******** 2026-01-05 00:21:15.639333 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDTNvIE7TZrUHXICbORA/Q0mLby1+K0fPdQ6b+GuRyXzy9x6LoiWHkSA7taCMPMbdQISzKxt8LTIGyZs99ZnNIS+Y5Sn6r+FP8ugcHpSOFkSRI2KXrX0pX/LSczGOzyD0N9XxInTB5R2XRu4t7M0SUlFq7Zye6UAQjfHli1nc/SVzJpvtTcdwgrTiPT39WudC4HwNejICbsfocdxPQKhKx1QwWwwXBwvPUijuC18/Iws408JEA/WG4atMwdLhDA1YC9tgpcIuXW4NMced64bP69tUfK4OhLBt8WN/z87nGnN1o7Y2GnWLNT1I7pmGp76JT4KGXCO1bS1Y5dJ5jaBX4U+EQsGS9nSDTrC8onjaJUMXL4RZKt3v4SUWVjBR/GMpj2SrbV7+Pd281hQF6shtCkeyYW9i5Z4RwuoCgOMNjRl7+T+huyfbTv2bvA3mMvcT3Tv0hpxqSUVWh+QNpA9zUont2RtnGhiQt0f3eNFALrQTC3rsxW+g02NWDthRGylj8=) 2026-01-05 00:21:15.639346 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMwO0FaUe6UeSSgvYiS7OarrQNi2fwZW+9YuNqftPxi8qFrWEv1kGGoSYp3YRl5/2iGSX3i+w7Ud09b+g1Sh/xk=) 2026-01-05 00:21:15.639358 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKm8UQmeNwSDr2DGEOD2qFq5WfTULNzhDDihHc8o8bvW) 2026-01-05 00:21:15.639394 | orchestrator | 2026-01-05 00:21:15.639407 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:15.639420 | orchestrator | Monday 05 January 2026 00:21:14 +0000 (0:00:01.002) 0:00:24.118 ******** 2026-01-05 00:21:15.639438 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBkqQaoBsMoAoagxLMgKP7G0vGTEO5hmxdKv7nP7Fnh+t3xoLX8sktmRQFiCPiw9H1kuY5GSMuIEqTbf9BXtvq8=) 2026-01-05 00:21:15.639452 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCywMvl1VYWEXmdh3n6kGZXaFU8H/amf6zYCUv7xCLBApP8u1c1vUjZthe2/NURa4ovSHMmVqAbIEK+zeips1HtXzugCqUPgmAsowJp3EAnTk49RrCZuGm3bZFism1Y1Q9j99ZUc6MunNq2nsLGc2pDIXb78QlrlA74VDpRiXwd/RtTER6UPX5CCPOuYJmSdcim3UNj0zi+cHi/tlFDuyetwtUxjqUhA/gkez9xC7jJXPfrqtnjyZtz1ROsYh2Pc6puB8qXrj5bjxEiVSSBpCJF75axDqi59rveCvh4Agpoh3U6T1+TpfrlEiTaAm/UvWnvwgVp+ynM/5ao0zbnw6Ks9G0O3rZmjhjvvI/tBgPdb/efieiES41GA/QfWz+hhlL69U6JvGjWFlDmW2dNqVGQXhMyaJpWkfeRhBVssDAAIWGjU7IS01ceJT9hKkXOpwiFAFHP70e4Jgg9H/EU10cKq+NeMvdVcGk9Yz3C4LCXEGf9Xrv4SE0R28w0FkEwM0k=) 2026-01-05 00:21:15.639478 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPofk4Osj/tkyPrsGjCwAmrlmHyyA79h6g3qhn0Dh9wh) 2026-01-05 00:21:19.720047 | orchestrator | 2026-01-05 00:21:19.720188 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:19.720213 | orchestrator | Monday 05 January 2026 00:21:15 +0000 (0:00:00.974) 0:00:25.092 ******** 2026-01-05 00:21:19.720231 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPFCaCOQs6Ol0tTChVElEpwz4AaH4NNTmvoOu7+Ab4Pf) 2026-01-05 00:21:19.720277 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx68N1YolcDBzrkcnHoEqsbXp74+kwKmZZb4nsWX3po6GguuCw1ZYuC5EoncrnXzLijb0ldpIPDsnEfSkLl4SGhhLZV6ct73TGl/XyMNN0JqymVHxNxpoXIERCFxlkFOrbMrGEC32JtdbPtnpxbz+HA2BKvvjnhNpKYH7RQPQ6GJtKsAIrsfXnKwPA/5En7qkdfgr8XHdTUI8BEaLSixxLhD67tFd5oaT0zWY4vke7Mk8ZFQIWmn3j/1N+fRV0pG/PzSth8j2QU9arGSCSvGORRJ0dmLAbKOyAZ7u2i4wUqT7eJ6A/YYgRXRXRsVSShHo7ixLNMRlaw4gAwEziVuCtQq40zbb+8/iHaiBaMGnGaHVm9aKgO04+n6WwCWIjHFHNOCUPF2fs+rwRn+iKBKH1/+cTLu/V1B8xl+aFKhS1/wVi7i6RM/2zaMnrlCuqHDKOlYA4tXVj5OMZivhBKDGo/8DrbZ2DCzQmQqMQtsO33F/ioDF5kR/ndB1BpZzT9mk=) 2026-01-05 00:21:19.720337 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCPbMLJONfNyX12cLIlIscq65FxCcWNlSg1vl1Y1KepQD7I0K+Fivuh37qYTXfmG22Zm3/oUnPOryYMWUgoomo=) 2026-01-05 00:21:19.720357 | orchestrator | 2026-01-05 00:21:19.720431 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:19.720449 | orchestrator | Monday 05 January 2026 00:21:16 +0000 (0:00:01.005) 0:00:26.097 ******** 2026-01-05 00:21:19.720467 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH6E+vukt5ci1CrLlrd6MXLW8JmnNj0UDEuDX4da/t6BBUr+nOG/6WYXxG2Ya5UvI3gZg1L1CIzmIyVTZLrKjmTkWDXoWSqfPvCT0xZs5h07+zDQ81nYeMoys1fp7y+UMTpAvBQxKI2oqOsihh98BcAYp890hwzAaUq9tQQgAe6JBudbDUsTI+uYr6/HJlijdeA/XLyXuV6Hj+Fphb/kZlozb+EnbM3/7+2087HVkNMAhAgCMSS4czLi5a3o1t8fVtlaEWp1UXjmTAOToN3byn4DxwlV0l4++4p89qiGtbUw2B8HWchaHLhdz71Z+H8dQTQV1LnvhS/kxkOdbiQRRIpGumoy8+BU4gaYL9YJ+HJhSKdKeT4YLx42ODnlSTtVMYpGQVwPgcptLaPV+SG9IoqE8V+jsgPekuZ/hKwjWuZJT4FnN1mevoqv4WNI9QnpnHdTu2v4KFN73Xw5FyXROVkAKtRp3uBkF+TIfOSvO6VQxJAjodGZBtGeOUZkv8JVM=) 2026-01-05 00:21:19.720486 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIo+4WV1VOnONXzRZWQ9so9F9FxqWfA8MuNbv00kmtCzvMHKM1LZV662Itpp/Azi7OU+Xgtc/EB1kuA/yH9GzlE=) 2026-01-05 00:21:19.720504 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDPkyXbfMGQU9uczlgX4HRGsQAwCZnBd+JAXf7EyuRWU) 2026-01-05 00:21:19.720520 | orchestrator | 2026-01-05 00:21:19.720537 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:19.720553 | orchestrator | Monday 05 January 2026 00:21:17 +0000 (0:00:00.978) 0:00:27.075 ******** 2026-01-05 00:21:19.720571 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGBgbOiOXg6owBBIS4by6b9UaP/hLAdKUtFxVzjBJVga) 2026-01-05 00:21:19.720589 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCE4XcNlR+zy/ZlhB46AR/VlvfxFoZB6Zqfas4+MJmBQHpoEjCkZtwb5WElAQTxmVeiOEeDIROGAcRRGaoMLnrzHQMR/mklt10f+bQYJiT1WlPM2HzjRgGHa7wuP/th3QcNJtVJEoXyozpvawPnYrCvaL8st9T6tUj37cAHvs9rX3stkvpf5k8/VfvHywJpMSouks2xlDbosN2A6gdvhyI/SuD/DW/C2djVzQUxYQWFLfDCsM9poP7wMfDqZSCClZqMqla+m2MaXY0z5vyX6Vqpmbz+dg1mc7cUGlzUK/h9B/cSq6qPqoGjyuOb/X8SpMvCUDE6CSUlJ7frmNpxQJfHU4leeGULYYKPUL2TXENk04leoU5fvQSSHxzYkT+sOmzbC9sDOVw5yrKnMlGHmpVxg3fWI6dJi9XhlB7TTw1ITxqVLIxtxlsyzl5esH1LkkpSE1RjwdbbmQ53AGq/QFPlv4awQzAKl6TVQbo+B4o8/vd32dMuO/+JKSfjJ9JtE8=) 2026-01-05 00:21:19.720607 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPK96t4KvUFKukEi6lFQ8GdPrJctqm/2VvDAgik6IhdjrSkm4ft8wPtZCsNxUv5DzPcC1whc6Sf1xwT+oimEgXc=) 2026-01-05 00:21:19.720624 | orchestrator | 2026-01-05 00:21:19.720640 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-05 00:21:19.720658 | orchestrator | Monday 05 January 2026 00:21:18 +0000 (0:00:00.946) 0:00:28.022 ******** 2026-01-05 00:21:19.720676 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-05 00:21:19.720695 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-05 00:21:19.720712 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-05 00:21:19.720728 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-05 00:21:19.720744 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-05 00:21:19.720762 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-05 00:21:19.720780 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-05 00:21:19.720798 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:21:19.720816 | orchestrator | 2026-01-05 00:21:19.720855 | 
orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-05 00:21:19.720885 | orchestrator | Monday 05 January 2026 00:21:18 +0000 (0:00:00.149) 0:00:28.171 ******** 2026-01-05 00:21:19.720904 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:21:19.720921 | orchestrator | 2026-01-05 00:21:19.720937 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-05 00:21:19.720952 | orchestrator | Monday 05 January 2026 00:21:18 +0000 (0:00:00.049) 0:00:28.221 ******** 2026-01-05 00:21:19.720969 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:21:19.720986 | orchestrator | 2026-01-05 00:21:19.721003 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-05 00:21:19.721020 | orchestrator | Monday 05 January 2026 00:21:18 +0000 (0:00:00.059) 0:00:28.280 ******** 2026-01-05 00:21:19.721036 | orchestrator | changed: [testbed-manager] 2026-01-05 00:21:19.721053 | orchestrator | 2026-01-05 00:21:19.721069 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:21:19.721086 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:21:19.721105 | orchestrator | 2026-01-05 00:21:19.721121 | orchestrator | 2026-01-05 00:21:19.721138 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:21:19.721154 | orchestrator | Monday 05 January 2026 00:21:19 +0000 (0:00:00.683) 0:00:28.964 ******** 2026-01-05 00:21:19.721170 | orchestrator | =============================================================================== 2026-01-05 00:21:19.721187 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.34s 2026-01-05 00:21:19.721204 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with 
ansible_host --- 5.62s 2026-01-05 00:21:19.721221 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.31s 2026-01-05 00:21:19.721237 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-01-05 00:21:19.721252 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-01-05 00:21:19.721278 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-01-05 00:21:19.721296 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-01-05 00:21:19.721312 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:21:19.721329 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-05 00:21:19.721345 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-05 00:21:19.721361 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-05 00:21:19.721417 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-05 00:21:19.721435 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-05 00:21:19.721452 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-01-05 00:21:19.721468 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-01-05 00:21:19.721485 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-01-05 00:21:19.721502 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.68s 2026-01-05 00:21:19.721518 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all 
hosts with hostname --- 0.21s 2026-01-05 00:21:19.721536 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-01-05 00:21:19.721553 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-01-05 00:21:20.055977 | orchestrator | + osism apply squid 2026-01-05 00:21:32.359591 | orchestrator | 2026-01-05 00:21:32 | INFO  | Task 36f4a814-86ef-49c6-959b-5d164b490d65 (squid) was prepared for execution. 2026-01-05 00:21:32.359711 | orchestrator | 2026-01-05 00:21:32 | INFO  | It takes a moment until task 36f4a814-86ef-49c6-959b-5d164b490d65 (squid) has been started and output is visible here. 2026-01-05 00:23:26.277237 | orchestrator | 2026-01-05 00:23:26.277345 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-05 00:23:26.277357 | orchestrator | 2026-01-05 00:23:26.277364 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-05 00:23:26.277371 | orchestrator | Monday 05 January 2026 00:21:36 +0000 (0:00:00.147) 0:00:00.147 ******** 2026-01-05 00:23:26.277379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:23:26.277386 | orchestrator | 2026-01-05 00:23:26.277393 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-05 00:23:26.277399 | orchestrator | Monday 05 January 2026 00:21:36 +0000 (0:00:00.073) 0:00:00.220 ******** 2026-01-05 00:23:26.277406 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:26.277413 | orchestrator | 2026-01-05 00:23:26.277420 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-05 00:23:26.277426 | orchestrator | Monday 05 January 2026 00:21:37 +0000 (0:00:01.464) 0:00:01.685 ******** 2026-01-05 
00:23:26.277433 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-05 00:23:26.277439 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-05 00:23:26.277446 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-05 00:23:26.277452 | orchestrator | 2026-01-05 00:23:26.277458 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-05 00:23:26.277465 | orchestrator | Monday 05 January 2026 00:21:38 +0000 (0:00:01.181) 0:00:02.867 ******** 2026-01-05 00:23:26.277471 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-05 00:23:26.277478 | orchestrator | 2026-01-05 00:23:26.277484 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-05 00:23:26.277490 | orchestrator | Monday 05 January 2026 00:21:40 +0000 (0:00:01.108) 0:00:03.976 ******** 2026-01-05 00:23:26.277497 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:26.277503 | orchestrator | 2026-01-05 00:23:26.277555 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-05 00:23:26.277562 | orchestrator | Monday 05 January 2026 00:21:40 +0000 (0:00:00.378) 0:00:04.354 ******** 2026-01-05 00:23:26.277569 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:26.277575 | orchestrator | 2026-01-05 00:23:26.277582 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-05 00:23:26.277588 | orchestrator | Monday 05 January 2026 00:21:41 +0000 (0:00:00.927) 0:00:05.282 ******** 2026-01-05 00:23:26.277594 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-05 00:23:26.277601 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:26.277608 | orchestrator | 2026-01-05 00:23:26.277614 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-05 00:23:26.277621 | orchestrator | Monday 05 January 2026 00:22:13 +0000 (0:00:31.885) 0:00:37.168 ******** 2026-01-05 00:23:26.277627 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:26.277633 | orchestrator | 2026-01-05 00:23:26.277640 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-05 00:23:26.277646 | orchestrator | Monday 05 January 2026 00:22:25 +0000 (0:00:12.026) 0:00:49.194 ******** 2026-01-05 00:23:26.277653 | orchestrator | Pausing for 60 seconds 2026-01-05 00:23:26.277660 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:26.277667 | orchestrator | 2026-01-05 00:23:26.277673 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-05 00:23:26.277679 | orchestrator | Monday 05 January 2026 00:23:25 +0000 (0:01:00.077) 0:01:49.272 ******** 2026-01-05 00:23:26.277686 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:26.277692 | orchestrator | 2026-01-05 00:23:26.277698 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-05 00:23:26.277729 | orchestrator | Monday 05 January 2026 00:23:25 +0000 (0:00:00.060) 0:01:49.332 ******** 2026-01-05 00:23:26.277736 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:26.277742 | orchestrator | 2026-01-05 00:23:26.277748 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:23:26.277755 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:23:26.277762 | orchestrator | 2026-01-05 00:23:26.277768 | orchestrator | 2026-01-05 00:23:26.277774 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-05 00:23:26.277781 | orchestrator | Monday 05 January 2026 00:23:25 +0000 (0:00:00.607) 0:01:49.939 ******** 2026-01-05 00:23:26.277788 | orchestrator | =============================================================================== 2026-01-05 00:23:26.277795 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-01-05 00:23:26.277803 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.89s 2026-01-05 00:23:26.277810 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.03s 2026-01-05 00:23:26.277817 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.46s 2026-01-05 00:23:26.277824 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2026-01-05 00:23:26.277831 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s 2026-01-05 00:23:26.277838 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2026-01-05 00:23:26.277845 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2026-01-05 00:23:26.277852 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-01-05 00:23:26.277859 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-01-05 00:23:26.277866 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-01-05 00:23:26.623499 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-05 00:23:26.623676 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-05 00:23:26.631201 | orchestrator | + set -e 2026-01-05 00:23:26.631258 | orchestrator | + NAMESPACE=kolla 2026-01-05 
00:23:26.631274 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-05 00:23:26.635388 | orchestrator | ++ semver latest 9.0.0 2026-01-05 00:23:26.706089 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-05 00:23:26.706191 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-05 00:23:26.707728 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-05 00:23:38.874952 | orchestrator | 2026-01-05 00:23:38 | INFO  | Task 59e9565f-8b57-460a-9f76-4e2959b97ed1 (operator) was prepared for execution. 2026-01-05 00:23:38.875040 | orchestrator | 2026-01-05 00:23:38 | INFO  | It takes a moment until task 59e9565f-8b57-460a-9f76-4e2959b97ed1 (operator) has been started and output is visible here. 2026-01-05 00:23:55.257063 | orchestrator | 2026-01-05 00:23:55.257198 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-05 00:23:55.257227 | orchestrator | 2026-01-05 00:23:55.257240 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:23:55.257252 | orchestrator | Monday 05 January 2026 00:23:43 +0000 (0:00:00.152) 0:00:00.152 ******** 2026-01-05 00:23:55.257265 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:23:55.257278 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:23:55.257289 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:23:55.257359 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:23:55.257371 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:23:55.257386 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:23:55.257397 | orchestrator | 2026-01-05 00:23:55.257408 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-05 00:23:55.257419 | orchestrator | Monday 05 January 2026 00:23:46 +0000 (0:00:03.417) 0:00:03.569 ******** 2026-01-05 00:23:55.257458 | orchestrator | ok: [testbed-node-0] 
2026-01-05 00:23:55.257470 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:23:55.257481 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:23:55.257491 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:23:55.257502 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:23:55.257512 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:23:55.257523 | orchestrator | 2026-01-05 00:23:55.257534 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-05 00:23:55.257595 | orchestrator | 2026-01-05 00:23:55.257610 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-05 00:23:55.257644 | orchestrator | Monday 05 January 2026 00:23:47 +0000 (0:00:00.758) 0:00:04.328 ******** 2026-01-05 00:23:55.257657 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:23:55.257670 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:23:55.257682 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:23:55.257695 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:23:55.257713 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:23:55.257727 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:23:55.257740 | orchestrator | 2026-01-05 00:23:55.257754 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-05 00:23:55.257767 | orchestrator | Monday 05 January 2026 00:23:47 +0000 (0:00:00.193) 0:00:04.521 ******** 2026-01-05 00:23:55.257780 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:23:55.257792 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:23:55.257804 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:23:55.257816 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:23:55.257829 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:23:55.257841 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:23:55.257854 | orchestrator | 2026-01-05 00:23:55.257866 | orchestrator | TASK [osism.commons.operator : Create operator group] 
**************************
2026-01-05 00:23:55.257879 | orchestrator | Monday 05 January 2026 00:23:47 +0000 (0:00:00.182) 0:00:04.704 ********
2026-01-05 00:23:55.257892 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:23:55.257904 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:23:55.257915 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:23:55.257926 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:23:55.257936 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:23:55.257947 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:23:55.257958 | orchestrator |
2026-01-05 00:23:55.257969 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-05 00:23:55.257979 | orchestrator | Monday 05 January 2026 00:23:48 +0000 (0:00:00.612) 0:00:05.316 ********
2026-01-05 00:23:55.257990 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:23:55.258000 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:23:55.258011 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:23:55.258086 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:23:55.258097 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:23:55.258123 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:23:55.258134 | orchestrator |
2026-01-05 00:23:55.258157 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-05 00:23:55.258168 | orchestrator | Monday 05 January 2026 00:23:49 +0000 (0:00:00.812) 0:00:06.128 ********
2026-01-05 00:23:55.258179 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-05 00:23:55.258190 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-05 00:23:55.258201 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-05 00:23:55.258212 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-05 00:23:55.258222 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-05 00:23:55.258233 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-05 00:23:55.258255 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-05 00:23:55.258266 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-05 00:23:55.258277 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-05 00:23:55.258287 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-05 00:23:55.258307 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-05 00:23:55.258318 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-05 00:23:55.258328 | orchestrator |
2026-01-05 00:23:55.258339 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-05 00:23:55.258350 | orchestrator | Monday 05 January 2026 00:23:50 +0000 (0:00:01.160) 0:00:07.288 ********
2026-01-05 00:23:55.258361 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:23:55.258372 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:23:55.258382 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:23:55.258393 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:23:55.258403 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:23:55.258414 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:23:55.258424 | orchestrator |
2026-01-05 00:23:55.258435 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-05 00:23:55.258447 | orchestrator | Monday 05 January 2026 00:23:51 +0000 (0:00:01.271) 0:00:08.560 ********
2026-01-05 00:23:55.258458 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-05 00:23:55.258468 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-05 00:23:55.258479 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-05 00:23:55.258490 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:23:55.258521 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:23:55.258533 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:23:55.258571 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:23:55.258583 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:23:55.258594 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:23:55.258605 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-05 00:23:55.258616 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-05 00:23:55.258626 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-05 00:23:55.258637 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-05 00:23:55.258648 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-05 00:23:55.258658 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-05 00:23:55.258669 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:23:55.258680 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:23:55.258690 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:23:55.258701 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:23:55.258711 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:23:55.258722 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:23:55.258733 | orchestrator |
2026-01-05 00:23:55.258744 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-05 00:23:55.258755 | orchestrator | Monday 05 January 2026 00:23:52 +0000 (0:00:01.326) 0:00:09.886 ********
2026-01-05 00:23:55.258766 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:23:55.258777 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:23:55.258788 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:23:55.258798 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:23:55.258809 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:23:55.258820 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:23:55.258830 | orchestrator |
2026-01-05 00:23:55.258841 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-05 00:23:55.258852 | orchestrator | Monday 05 January 2026 00:23:53 +0000 (0:00:00.166) 0:00:10.053 ********
2026-01-05 00:23:55.258870 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:23:55.258881 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:23:55.258891 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:23:55.258902 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:23:55.258917 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:23:55.258934 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:23:55.258953 | orchestrator |
2026-01-05 00:23:55.258971 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-05 00:23:55.258987 | orchestrator | Monday 05 January 2026 00:23:53 +0000 (0:00:00.189) 0:00:10.243 ********
2026-01-05 00:23:55.259004 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:23:55.259021 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:23:55.259038 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:23:55.259058 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:23:55.259076 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:23:55.259093 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:23:55.259104 | orchestrator |
2026-01-05 00:23:55.259115 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-05 00:23:55.259126 | orchestrator | Monday 05 January 2026 00:23:53 +0000 (0:00:00.590) 0:00:10.833 ********
2026-01-05 00:23:55.259137 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:23:55.259148 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:23:55.259158 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:23:55.259169 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:23:55.259179 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:23:55.259190 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:23:55.259201 | orchestrator |
2026-01-05 00:23:55.259211 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-05 00:23:55.259222 | orchestrator | Monday 05 January 2026 00:23:54 +0000 (0:00:00.179) 0:00:11.012 ********
2026-01-05 00:23:55.259233 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-05 00:23:55.259244 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:23:55.259255 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-05 00:23:55.259265 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:23:55.259276 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 00:23:55.259287 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:23:55.259297 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:23:55.259308 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:23:55.259318 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-05 00:23:55.259329 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:23:55.259340 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-05 00:23:55.259350 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:23:55.259411 | orchestrator |
2026-01-05 00:23:55.259423 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-05 00:23:55.259434 | orchestrator | Monday 05 January 2026 00:23:54 +0000 (0:00:00.808) 0:00:11.821 ********
2026-01-05 00:23:55.259444 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:23:55.259455 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:23:55.259466 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:23:55.259477 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:23:55.259487 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:23:55.259498 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:23:55.259508 | orchestrator |
2026-01-05 00:23:55.259519 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-05 00:23:55.259530 | orchestrator | Monday 05 January 2026 00:23:55 +0000 (0:00:00.201) 0:00:12.023 ********
2026-01-05 00:23:55.259564 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:23:55.259576 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:23:55.259587 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:23:55.259598 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:23:55.259619 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:23:56.652328 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:23:56.652471 | orchestrator |
2026-01-05 00:23:56.652489 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-05 00:23:56.652502 | orchestrator | Monday 05 January 2026 00:23:55 +0000 (0:00:00.195) 0:00:12.218 ********
2026-01-05 00:23:56.652513 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:23:56.652524 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:23:56.652535 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:23:56.652637 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:23:56.652656 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:23:56.652669 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:23:56.652686 | orchestrator |
2026-01-05 00:23:56.652704 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-05 00:23:56.652722 | orchestrator | Monday 05 January 2026 00:23:55 +0000 (0:00:00.184) 0:00:12.402 ********
2026-01-05 00:23:56.652740 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:23:56.652760 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:23:56.652777 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:23:56.652795 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:23:56.652807 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:23:56.652817 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:23:56.652828 | orchestrator |
2026-01-05 00:23:56.652838 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-05 00:23:56.652868 | orchestrator | Monday 05 January 2026 00:23:56 +0000 (0:00:00.701) 0:00:13.103 ********
2026-01-05 00:23:56.652881 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:23:56.652894 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:23:56.652907 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:23:56.652919 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:23:56.652936 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:23:56.652947 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:23:56.652957 | orchestrator |
2026-01-05 00:23:56.652967 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:23:56.652980 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:23:56.652993 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:23:56.653004 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:23:56.653015 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:23:56.653025 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:23:56.653036 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:23:56.653046 | orchestrator |
2026-01-05 00:23:56.653057 | orchestrator |
2026-01-05 00:23:56.653068 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:23:56.653078 | orchestrator | Monday 05 January 2026 00:23:56 +0000 (0:00:00.233) 0:00:13.337 ********
2026-01-05 00:23:56.653089 | orchestrator | ===============================================================================
2026-01-05 00:23:56.653100 | orchestrator | Gathering Facts --------------------------------------------------------- 3.42s
2026-01-05 00:23:56.653110 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.33s
2026-01-05 00:23:56.653122 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s
2026-01-05 00:23:56.653132 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s
2026-01-05 00:23:56.653158 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2026-01-05 00:23:56.653169 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.81s
2026-01-05 00:23:56.653179 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2026-01-05 00:23:56.653190 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s
2026-01-05 00:23:56.653201 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2026-01-05 00:23:56.653211 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2026-01-05 00:23:56.653222 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-01-05 00:23:56.653233 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s
2026-01-05 00:23:56.653244 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s
2026-01-05 00:23:56.653255 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2026-01-05 00:23:56.653265 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-01-05 00:23:56.653276 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2026-01-05 00:23:56.653287 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-01-05 00:23:56.653297 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-01-05 00:23:56.653308 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-01-05 00:23:56.987251 | orchestrator | + osism apply --environment custom facts
2026-01-05 00:23:58.961040 | orchestrator | 2026-01-05 00:23:58 | INFO  | Trying to run play facts in environment custom
2026-01-05 00:24:09.191076 | orchestrator | 2026-01-05 00:24:09 | INFO  | Task c0f8382a-685b-445e-bcd4-dd0c7d1ac528 (facts) was prepared for execution.
2026-01-05 00:24:09.191199 | orchestrator | 2026-01-05 00:24:09 | INFO  | It takes a moment until task c0f8382a-685b-445e-bcd4-dd0c7d1ac528 (facts) has been started and output is visible here.
2026-01-05 00:24:53.980099 | orchestrator |
2026-01-05 00:24:53.980231 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-05 00:24:53.980248 | orchestrator |
2026-01-05 00:24:53.980261 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-05 00:24:53.980272 | orchestrator | Monday 05 January 2026 00:24:13 +0000 (0:00:00.090) 0:00:00.090 ********
2026-01-05 00:24:53.980283 | orchestrator | ok: [testbed-manager]
2026-01-05 00:24:53.980295 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:24:53.980307 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:24:53.980317 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:24:53.980328 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:24:53.980338 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:24:53.980349 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:24:53.980360 | orchestrator |
2026-01-05 00:24:53.980371 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-05 00:24:53.980382 | orchestrator | Monday 05 January 2026 00:24:14 +0000 (0:00:01.391) 0:00:01.481 ********
2026-01-05 00:24:53.980392 | orchestrator | ok: [testbed-manager]
2026-01-05 00:24:53.980403 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:24:53.980435 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:24:53.980446 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:24:53.980459 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:24:53.980469 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:24:53.980480 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:24:53.980491 | orchestrator |
2026-01-05 00:24:53.980502 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-05 00:24:53.980513 | orchestrator |
2026-01-05 00:24:53.980523 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-05 00:24:53.980556 | orchestrator | Monday 05 January 2026 00:24:15 +0000 (0:00:01.190) 0:00:02.671 ********
2026-01-05 00:24:53.980567 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:53.980578 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:53.980589 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:53.980629 | orchestrator |
2026-01-05 00:24:53.980642 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-05 00:24:53.980656 | orchestrator | Monday 05 January 2026 00:24:16 +0000 (0:00:00.126) 0:00:02.798 ********
2026-01-05 00:24:53.980669 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:53.980681 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:53.980693 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:53.980707 | orchestrator |
2026-01-05 00:24:53.980719 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-05 00:24:53.980732 | orchestrator | Monday 05 January 2026 00:24:16 +0000 (0:00:00.199) 0:00:02.997 ********
2026-01-05 00:24:53.980744 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:53.980756 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:53.980766 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:53.980777 | orchestrator |
2026-01-05 00:24:53.980787 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-05 00:24:53.980798 | orchestrator | Monday 05 January 2026 00:24:16 +0000 (0:00:00.223) 0:00:03.221 ********
2026-01-05 00:24:53.980810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:24:53.980823 | orchestrator |
2026-01-05 00:24:53.980834 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-05 00:24:53.980845 | orchestrator | Monday 05 January 2026 00:24:16 +0000 (0:00:00.171) 0:00:03.393 ********
2026-01-05 00:24:53.980855 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:53.980866 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:53.980876 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:53.980887 | orchestrator |
2026-01-05 00:24:53.980897 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-05 00:24:53.980908 | orchestrator | Monday 05 January 2026 00:24:17 +0000 (0:00:00.429) 0:00:03.823 ********
2026-01-05 00:24:53.980919 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:24:53.980929 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:24:53.980940 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:24:53.980950 | orchestrator |
2026-01-05 00:24:53.980961 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-05 00:24:53.980972 | orchestrator | Monday 05 January 2026 00:24:17 +0000 (0:00:00.155) 0:00:03.979 ********
2026-01-05 00:24:53.980982 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:24:53.980993 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:24:53.981003 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:24:53.981014 | orchestrator |
2026-01-05 00:24:53.981024 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-05 00:24:53.981035 | orchestrator | Monday 05 January 2026 00:24:18 +0000 (0:00:01.104) 0:00:05.083 ********
2026-01-05 00:24:53.981045 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:53.981056 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:53.981066 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:53.981077 | orchestrator |
2026-01-05 00:24:53.981088 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-05 00:24:53.981099 | orchestrator | Monday 05 January 2026 00:24:18 +0000 (0:00:00.445) 0:00:05.529 ********
2026-01-05 00:24:53.981109 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:24:53.981120 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:24:53.981130 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:24:53.981141 | orchestrator |
2026-01-05 00:24:53.981152 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-05 00:24:53.981162 | orchestrator | Monday 05 January 2026 00:24:19 +0000 (0:00:01.069) 0:00:06.599 ********
2026-01-05 00:24:53.981173 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:24:53.981192 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:24:53.981203 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:24:53.981214 | orchestrator |
2026-01-05 00:24:53.981225 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-05 00:24:53.981235 | orchestrator | Monday 05 January 2026 00:24:36 +0000 (0:00:16.133) 0:00:22.732 ********
2026-01-05 00:24:53.981246 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:24:53.981256 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:24:53.981267 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:24:53.981278 | orchestrator |
2026-01-05 00:24:53.981288 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-05 00:24:53.981317 | orchestrator | Monday 05 January 2026 00:24:36 +0000 (0:00:00.100) 0:00:22.832 ********
2026-01-05 00:24:53.981328 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:24:53.981339 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:24:53.981350 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:24:53.981360 | orchestrator |
2026-01-05 00:24:53.981371 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-05 00:24:53.981381 | orchestrator | Monday 05 January 2026 00:24:43 +0000 (0:00:07.740) 0:00:30.573 ********
2026-01-05 00:24:53.981392 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:53.981403 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:53.981413 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:53.981424 | orchestrator |
2026-01-05 00:24:53.981435 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-05 00:24:53.981445 | orchestrator | Monday 05 January 2026 00:24:44 +0000 (0:00:00.468) 0:00:31.041 ********
2026-01-05 00:24:53.981456 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-05 00:24:53.981467 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-05 00:24:53.981478 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-05 00:24:53.981488 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-05 00:24:53.981499 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-05 00:24:53.981510 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-05 00:24:53.981521 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-05 00:24:53.981532 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-05 00:24:53.981542 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-05 00:24:53.981553 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:24:53.981564 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:24:53.981575 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:24:53.981585 | orchestrator |
2026-01-05 00:24:53.981659 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-05 00:24:53.981672 | orchestrator | Monday 05 January 2026 00:24:47 +0000 (0:00:03.589) 0:00:34.631 ********
2026-01-05 00:24:53.981683 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:53.981693 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:53.981704 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:53.981715 | orchestrator |
2026-01-05 00:24:53.981725 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:24:53.981736 | orchestrator |
2026-01-05 00:24:53.981746 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:24:53.981757 | orchestrator | Monday 05 January 2026 00:24:49 +0000 (0:00:01.313) 0:00:35.944 ********
2026-01-05 00:24:53.981768 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:24:53.981778 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:24:53.981789 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:24:53.981799 | orchestrator | ok: [testbed-manager]
2026-01-05 00:24:53.981810 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:53.981827 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:53.981838 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:53.981848 | orchestrator |
2026-01-05 00:24:53.981859 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:24:53.981871 | orchestrator | testbed-manager : ok=3  changed=0  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:24:53.981882 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:24:53.981894 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:24:53.981905 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:24:53.981916 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:24:53.981927 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:24:53.981938 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:24:53.981948 | orchestrator |
2026-01-05 00:24:53.981959 | orchestrator |
2026-01-05 00:24:53.981970 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:24:53.981980 | orchestrator | Monday 05 January 2026 00:24:53 +0000 (0:00:04.686) 0:00:40.630 ********
2026-01-05 00:24:53.981991 | orchestrator | ===============================================================================
2026-01-05 00:24:53.982002 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.13s
2026-01-05 00:24:53.982012 | orchestrator | Install required packages (Debian) -------------------------------------- 7.74s
2026-01-05 00:24:53.982090 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.69s
2026-01-05 00:24:53.982101 | orchestrator | Copy fact files --------------------------------------------------------- 3.59s
2026-01-05 00:24:53.982112 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2026-01-05 00:24:53.982123 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s
2026-01-05 00:24:53.982142 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s
2026-01-05 00:24:54.248347 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.10s
2026-01-05 00:24:54.248460 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2026-01-05 00:24:54.248474 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-01-05 00:24:54.248486 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2026-01-05 00:24:54.248497 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-01-05 00:24:54.248508 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-05 00:24:54.248519 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-01-05 00:24:54.248530 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2026-01-05 00:24:54.248568 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s
2026-01-05 00:24:54.248580 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2026-01-05 00:24:54.248650 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-01-05 00:24:54.582923 | orchestrator | + osism apply bootstrap
2026-01-05 00:25:06.850377 | orchestrator | 2026-01-05 00:25:06 | INFO  | Task cfb05ec2-3798-48fd-b809-d3386d562ff8 (bootstrap) was prepared for execution.
2026-01-05 00:25:06.850494 | orchestrator | 2026-01-05 00:25:06 | INFO  | It takes a moment until task cfb05ec2-3798-48fd-b809-d3386d562ff8 (bootstrap) has been started and output is visible here.
2026-01-05 00:25:23.272777 | orchestrator | 2026-01-05 00:25:23.272905 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-01-05 00:25:23.272922 | orchestrator | 2026-01-05 00:25:23.272933 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-01-05 00:25:23.272944 | orchestrator | Monday 05 January 2026 00:25:11 +0000 (0:00:00.155) 0:00:00.155 ******** 2026-01-05 00:25:23.272955 | orchestrator | ok: [testbed-manager] 2026-01-05 00:25:23.272967 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:25:23.272978 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:25:23.272989 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:25:23.272999 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:25:23.273010 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:25:23.273021 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:25:23.273031 | orchestrator | 2026-01-05 00:25:23.273042 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 00:25:23.273053 | orchestrator | 2026-01-05 00:25:23.273063 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-05 00:25:23.273074 | orchestrator | Monday 05 January 2026 00:25:11 +0000 (0:00:00.273) 0:00:00.429 ******** 2026-01-05 00:25:23.273085 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:25:23.273095 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:25:23.273107 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:25:23.273118 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:25:23.273128 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:25:23.273139 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:25:23.273149 | orchestrator | ok: [testbed-manager] 2026-01-05 00:25:23.273160 | orchestrator | 2026-01-05 00:25:23.273171 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-01-05 00:25:23.273181 | orchestrator |
2026-01-05 00:25:23.273192 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:25:23.273203 | orchestrator | Monday 05 January 2026 00:25:15 +0000 (0:00:03.864) 0:00:04.293 ********
2026-01-05 00:25:23.273214 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-05 00:25:23.273225 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-05 00:25:23.273236 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-05 00:25:23.273247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-05 00:25:23.273257 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-05 00:25:23.273268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:25:23.273278 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-05 00:25:23.273289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:25:23.273300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-05 00:25:23.273310 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-05 00:25:23.273321 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-05 00:25:23.273332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:25:23.273343 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-05 00:25:23.273353 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:23.273364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:25:23.273375 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-05 00:25:23.273385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-05 00:25:23.273396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:25:23.273406 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-05 00:25:23.273417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:25:23.273454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:25:23.273466 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:23.273476 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-05 00:25:23.273487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-05 00:25:23.273497 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:25:23.273508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-05 00:25:23.273519 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:25:23.273529 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:23.273540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-05 00:25:23.273550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-05 00:25:23.273561 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-05 00:25:23.273571 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-05 00:25:23.273582 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:25:23.273592 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-05 00:25:23.273603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-05 00:25:23.273613 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:25:23.273655 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-05 00:25:23.273667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-05 00:25:23.273677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:25:23.273688 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:25:23.273699 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:23.273709 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-05 00:25:23.273720 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-05 00:25:23.273731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:25:23.273741 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-05 00:25:23.273752 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:25:23.273781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:25:23.273792 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:23.273803 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-05 00:25:23.273813 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:25:23.273824 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:25:23.273834 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:25:23.273845 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:23.273855 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:25:23.273866 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:25:23.273876 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:23.273887 | orchestrator |
2026-01-05 00:25:23.273897 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-05 00:25:23.273908 | orchestrator |
2026-01-05 00:25:23.273919 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-05 00:25:23.273930 | orchestrator | Monday 05 January 2026 00:25:15 +0000 (0:00:00.478) 0:00:04.771 ********
2026-01-05 00:25:23.273940 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:23.273951 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:23.273961 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:23.273972 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:23.273982 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:23.273993 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:23.274003 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:23.274071 | orchestrator |
2026-01-05 00:25:23.274084 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-05 00:25:23.274107 | orchestrator | Monday 05 January 2026 00:25:16 +0000 (0:00:01.203) 0:00:05.975 ********
2026-01-05 00:25:23.274117 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:23.274128 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:23.274139 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:23.274149 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:23.274160 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:23.274171 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:23.274181 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:23.274192 | orchestrator |
2026-01-05 00:25:23.274203 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-05 00:25:23.274213 | orchestrator | Monday 05 January 2026 00:25:18 +0000 (0:00:01.241) 0:00:07.217 ********
2026-01-05 00:25:23.274225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:23.274239 | orchestrator |
2026-01-05 00:25:23.274250 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-05 00:25:23.274261 | orchestrator | Monday 05 January 2026 00:25:18 +0000 (0:00:00.286) 0:00:07.503 ********
2026-01-05 00:25:23.274272 | orchestrator | changed: [testbed-manager]
2026-01-05 00:25:23.274282 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:23.274293 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:23.274303 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:23.274314 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:23.274325 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:23.274335 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:23.274346 | orchestrator |
2026-01-05 00:25:23.274356 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-01-05 00:25:23.274367 | orchestrator | Monday 05 January 2026 00:25:20 +0000 (0:00:02.117) 0:00:09.621 ********
2026-01-05 00:25:23.274398 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:23.274411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:23.274423 | orchestrator |
2026-01-05 00:25:23.274434 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-01-05 00:25:23.274445 | orchestrator | Monday 05 January 2026 00:25:20 +0000 (0:00:00.274) 0:00:09.896 ********
2026-01-05 00:25:23.274456 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:23.274466 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:23.274477 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:23.274487 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:23.274498 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:23.274509 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:23.274519 | orchestrator |
2026-01-05 00:25:23.274530 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-01-05 00:25:23.274541 | orchestrator | Monday 05 January 2026 00:25:22 +0000 (0:00:01.151) 0:00:11.048 ********
2026-01-05 00:25:23.274551 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:23.274562 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:23.274572 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:23.274583 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:23.274593 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:23.274603 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:23.274614 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:23.274664 | orchestrator |
2026-01-05 00:25:23.274676 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-01-05 00:25:23.274693 | orchestrator | Monday 05 January 2026 00:25:22 +0000 (0:00:00.627) 0:00:11.676 ********
2026-01-05 00:25:23.274704 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:23.274722 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:23.274733 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:23.274743 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:23.274754 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:23.274764 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:23.274775 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:23.274786 | orchestrator |
2026-01-05 00:25:23.274797 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-05 00:25:23.274808 | orchestrator | Monday 05 January 2026 00:25:23 +0000 (0:00:00.220) 0:00:12.109 ********
2026-01-05 00:25:23.274819 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:23.274830 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:23.274849 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:35.631388 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:35.631524 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:35.631540 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:35.631552 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:35.631564 | orchestrator |
2026-01-05 00:25:35.631577 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-05 00:25:35.631590 | orchestrator | Monday 05 January 2026 00:25:23 +0000 (0:00:00.220) 0:00:12.330 ********
2026-01-05 00:25:35.632371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:35.632408 | orchestrator |
2026-01-05 00:25:35.632420 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-05 00:25:35.632433 | orchestrator | Monday 05 January 2026 00:25:23 +0000 (0:00:00.304) 0:00:12.634 ********
2026-01-05 00:25:35.632450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:35.632469 | orchestrator |
2026-01-05 00:25:35.632498 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-05 00:25:35.632518 | orchestrator | Monday 05 January 2026 00:25:23 +0000 (0:00:00.326) 0:00:12.961 ********
2026-01-05 00:25:35.632536 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.632557 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.632575 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:35.632593 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:35.632613 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.632676 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.632693 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:35.632704 | orchestrator |
2026-01-05 00:25:35.632715 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-05 00:25:35.632727 | orchestrator | Monday 05 January 2026 00:25:25 +0000 (0:00:01.478) 0:00:14.439 ********
2026-01-05 00:25:35.632738 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:35.632750 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:35.632762 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:35.632773 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:35.632784 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:35.632795 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:35.632806 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:35.632817 | orchestrator |
2026-01-05 00:25:35.632828 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-05 00:25:35.632839 | orchestrator | Monday 05 January 2026 00:25:25 +0000 (0:00:00.264) 0:00:14.704 ********
2026-01-05 00:25:35.632850 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.632860 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.632871 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.632882 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.632893 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:35.632926 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:35.632937 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:35.632948 | orchestrator |
2026-01-05 00:25:35.632958 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-05 00:25:35.632969 | orchestrator | Monday 05 January 2026 00:25:26 +0000 (0:00:00.547) 0:00:15.252 ********
2026-01-05 00:25:35.632980 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:35.632991 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:35.633001 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:35.633012 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:35.633023 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:35.633033 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:35.633044 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:35.633054 | orchestrator |
2026-01-05 00:25:35.633065 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-05 00:25:35.633077 | orchestrator | Monday 05 January 2026 00:25:26 +0000 (0:00:00.367) 0:00:15.619 ********
2026-01-05 00:25:35.633088 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.633099 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:35.633109 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:35.633120 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:35.633131 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:35.633141 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:35.633152 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:35.633162 | orchestrator |
2026-01-05 00:25:35.633173 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-05 00:25:35.633184 | orchestrator | Monday 05 January 2026 00:25:27 +0000 (0:00:00.555) 0:00:16.174 ********
2026-01-05 00:25:35.633195 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.633205 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:35.633216 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:35.633227 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:35.633237 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:35.633248 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:35.633258 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:35.633269 | orchestrator |
2026-01-05 00:25:35.633287 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-05 00:25:35.633298 | orchestrator | Monday 05 January 2026 00:25:28 +0000 (0:00:01.126) 0:00:17.301 ********
2026-01-05 00:25:35.633309 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.633320 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:35.633330 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:35.633342 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.633353 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:35.633363 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.633374 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.633385 | orchestrator |
2026-01-05 00:25:35.633396 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-05 00:25:35.633407 | orchestrator | Monday 05 January 2026 00:25:29 +0000 (0:00:01.070) 0:00:18.372 ********
2026-01-05 00:25:35.633439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:35.633451 | orchestrator |
2026-01-05 00:25:35.633462 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-05 00:25:35.633473 | orchestrator | Monday 05 January 2026 00:25:29 +0000 (0:00:00.340) 0:00:18.712 ********
2026-01-05 00:25:35.633483 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:35.633494 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:35.633504 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:35.633515 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:35.633526 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:35.633544 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:35.633555 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:35.633565 | orchestrator |
2026-01-05 00:25:35.633576 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-05 00:25:35.633587 | orchestrator | Monday 05 January 2026 00:25:31 +0000 (0:00:01.343) 0:00:20.056 ********
2026-01-05 00:25:35.633598 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.633608 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.633619 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.633649 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.633660 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:35.633671 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:35.633682 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:35.633692 | orchestrator |
2026-01-05 00:25:35.633703 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-05 00:25:35.633714 | orchestrator | Monday 05 January 2026 00:25:31 +0000 (0:00:00.225) 0:00:20.281 ********
2026-01-05 00:25:35.633725 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.633735 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.633746 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.633757 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.633768 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:35.633778 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:35.633789 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:35.633800 | orchestrator |
2026-01-05 00:25:35.633811 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-05 00:25:35.633821 | orchestrator | Monday 05 January 2026 00:25:31 +0000 (0:00:00.227) 0:00:20.509 ********
2026-01-05 00:25:35.633832 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.633843 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.633853 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.633864 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.633875 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:35.633885 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:35.633896 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:35.633906 | orchestrator |
2026-01-05 00:25:35.633917 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-05 00:25:35.633928 | orchestrator | Monday 05 January 2026 00:25:31 +0000 (0:00:00.243) 0:00:20.752 ********
2026-01-05 00:25:35.633940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:35.633953 | orchestrator |
2026-01-05 00:25:35.633964 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-05 00:25:35.633975 | orchestrator | Monday 05 January 2026 00:25:32 +0000 (0:00:00.312) 0:00:21.064 ********
2026-01-05 00:25:35.633985 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.633996 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.634007 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.634088 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.634103 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:35.634114 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:35.634124 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:35.634135 | orchestrator |
2026-01-05 00:25:35.634146 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-05 00:25:35.634157 | orchestrator | Monday 05 January 2026 00:25:32 +0000 (0:00:00.518) 0:00:21.583 ********
2026-01-05 00:25:35.634167 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:35.634178 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:35.634189 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:35.634200 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:35.634210 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:35.634221 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:35.634231 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:35.634249 | orchestrator |
2026-01-05 00:25:35.634260 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-05 00:25:35.634271 | orchestrator | Monday 05 January 2026 00:25:32 +0000 (0:00:00.222) 0:00:21.806 ********
2026-01-05 00:25:35.634282 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.634292 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.634303 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.634314 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.634324 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:35.634335 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:35.634346 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:35.634357 | orchestrator |
2026-01-05 00:25:35.634367 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-05 00:25:35.634379 | orchestrator | Monday 05 January 2026 00:25:33 +0000 (0:00:01.072) 0:00:22.879 ********
2026-01-05 00:25:35.634389 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.634400 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.634411 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.634422 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:35.634433 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:35.634443 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.634454 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:35.634464 | orchestrator |
2026-01-05 00:25:35.634475 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-05 00:25:35.634486 | orchestrator | Monday 05 January 2026 00:25:34 +0000 (0:00:00.581) 0:00:23.460 ********
2026-01-05 00:25:35.634497 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:35.634508 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:35.634518 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:35.634529 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:35.634548 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:17.452164 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:17.452280 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:17.452290 | orchestrator |
2026-01-05 00:26:17.452299 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-05 00:26:17.452308 | orchestrator | Monday 05 January 2026 00:25:35 +0000 (0:00:01.136) 0:00:24.596 ********
2026-01-05 00:26:17.452315 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.452323 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.452329 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.452335 | orchestrator | changed: [testbed-manager]
2026-01-05 00:26:17.452343 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:17.452350 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:17.452357 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:17.452364 | orchestrator |
2026-01-05 00:26:17.452371 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-05 00:26:17.452378 | orchestrator | Monday 05 January 2026 00:25:51 +0000 (0:00:16.253) 0:00:40.850 ********
2026-01-05 00:26:17.452385 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.452392 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.452400 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.452407 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.452414 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.452421 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.452428 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.452434 | orchestrator |
2026-01-05 00:26:17.452441 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-05 00:26:17.452448 | orchestrator | Monday 05 January 2026 00:25:52 +0000 (0:00:00.299) 0:00:41.149 ********
2026-01-05 00:26:17.452455 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.452462 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.452469 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.452476 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.452482 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.452487 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.452494 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.452525 | orchestrator |
2026-01-05 00:26:17.452533 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-05 00:26:17.452540 | orchestrator | Monday 05 January 2026 00:25:52 +0000 (0:00:00.256) 0:00:41.406 ********
2026-01-05 00:26:17.452546 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.452552 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.452558 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.452564 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.452571 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.452578 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.452603 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.452609 | orchestrator |
2026-01-05 00:26:17.452616 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-05 00:26:17.452622 | orchestrator | Monday 05 January 2026 00:25:52 +0000 (0:00:00.233) 0:00:41.640 ********
2026-01-05 00:26:17.452631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:26:17.452640 | orchestrator |
2026-01-05 00:26:17.452647 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-05 00:26:17.452654 | orchestrator | Monday 05 January 2026 00:25:52 +0000 (0:00:00.305) 0:00:41.945 ********
2026-01-05 00:26:17.452691 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.452700 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.452708 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.452715 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.452722 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.452728 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.452736 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.452743 | orchestrator |
2026-01-05 00:26:17.452750 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-05 00:26:17.452758 | orchestrator | Monday 05 January 2026 00:25:54 +0000 (0:00:01.820) 0:00:43.766 ********
2026-01-05 00:26:17.452764 | orchestrator | changed: [testbed-manager]
2026-01-05 00:26:17.452770 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:17.452777 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:17.452783 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:17.452789 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:17.452795 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:17.452800 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:17.452806 | orchestrator |
2026-01-05 00:26:17.452813 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-05 00:26:17.452819 | orchestrator | Monday 05 January 2026 00:25:55 +0000 (0:00:01.067) 0:00:44.834 ********
2026-01-05 00:26:17.452826 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.452833 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.452840 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.452847 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.452854 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.452861 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.452867 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.452874 | orchestrator |
2026-01-05 00:26:17.452881 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-05 00:26:17.452888 | orchestrator | Monday 05 January 2026 00:25:56 +0000 (0:00:00.871) 0:00:45.705 ********
2026-01-05 00:26:17.452901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:26:17.452909 | orchestrator |
2026-01-05 00:26:17.452916 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-05 00:26:17.452924 | orchestrator | Monday 05 January 2026 00:25:57 +0000 (0:00:00.288) 0:00:45.994 ********
2026-01-05 00:26:17.452931 | orchestrator | changed: [testbed-manager]
2026-01-05 00:26:17.452946 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:17.452953 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:17.452960 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:17.452967 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:17.452974 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:17.452980 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:17.452986 | orchestrator |
2026-01-05 00:26:17.453010 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-05 00:26:17.453017 | orchestrator | Monday 05 January 2026 00:25:58 +0000 (0:00:01.067) 0:00:47.061 ********
2026-01-05 00:26:17.453023 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:26:17.453029 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:26:17.453035 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:26:17.453041 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:26:17.453047 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:26:17.453053 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:26:17.453058 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:26:17.453064 | orchestrator |
2026-01-05 00:26:17.453071 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-05 00:26:17.453077 | orchestrator | Monday 05 January 2026 00:25:58 +0000 (0:00:00.231) 0:00:47.293 ********
2026-01-05 00:26:17.453084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:26:17.453091 | orchestrator |
2026-01-05 00:26:17.453098 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-05 00:26:17.453104 | orchestrator | Monday 05 January 2026 00:25:58 +0000 (0:00:00.312) 0:00:47.605 ********
2026-01-05 00:26:17.453110 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.453117 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.453124 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.453130 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.453137 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.453143 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.453150 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.453156 | orchestrator |
2026-01-05 00:26:17.453163 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-05 00:26:17.453169 | orchestrator | Monday 05 January 2026 00:26:00 +0000 (0:00:01.624) 0:00:49.229 ********
2026-01-05 00:26:17.453175 | orchestrator | changed: [testbed-manager]
2026-01-05 00:26:17.453182 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:17.453189 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:17.453196 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:17.453202 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:17.453208 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:17.453214 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:17.453219 | orchestrator |
2026-01-05 00:26:17.453225 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-05 00:26:17.453232 | orchestrator | Monday 05 January 2026 00:26:01 +0000 (0:00:01.187) 0:00:50.417 ********
2026-01-05 00:26:17.453238 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:17.453244 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:17.453250 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:17.453256 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:17.453262 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:17.453267 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:17.453273 | orchestrator | changed: [testbed-manager]
2026-01-05 00:26:17.453279 | orchestrator |
2026-01-05 00:26:17.453285 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-05 00:26:17.453291 | orchestrator | Monday 05 January 2026 00:26:14 +0000 (0:00:12.952) 0:01:03.370 ********
2026-01-05 00:26:17.453297 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.453311 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.453317 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.453323 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.453329 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.453335 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.453340 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.453346 | orchestrator |
2026-01-05 00:26:17.453351 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-05 00:26:17.453358 | orchestrator | Monday 05 January 2026 00:26:15 +0000 (0:00:01.259) 0:01:04.630 ********
2026-01-05 00:26:17.453364 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.453370 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.453377 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.453383 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.453390 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.453396 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.453403 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.453410 | orchestrator |
2026-01-05 00:26:17.453417 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-05 00:26:17.453423 | orchestrator | Monday 05 January 2026 00:26:16 +0000 (0:00:01.014) 0:01:05.644 ********
2026-01-05 00:26:17.453430 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:17.453437 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:17.453444 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:17.453451 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:17.453458 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:17.453464 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:17.453471 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:17.453478 | orchestrator |
2026-01-05 00:26:17.453485 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-05 00:26:17.453492 |
orchestrator | Monday 05 January 2026 00:26:16 +0000 (0:00:00.242) 0:01:05.887 ******** 2026-01-05 00:26:17.453499 | orchestrator | ok: [testbed-manager] 2026-01-05 00:26:17.453506 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:26:17.453518 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:26:17.453525 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:26:17.453531 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:26:17.453537 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:26:17.453543 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:26:17.453549 | orchestrator | 2026-01-05 00:26:17.453556 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-05 00:26:17.453562 | orchestrator | Monday 05 January 2026 00:26:17 +0000 (0:00:00.239) 0:01:06.126 ******** 2026-01-05 00:26:17.453570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:26:17.453578 | orchestrator | 2026-01-05 00:26:17.453594 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-05 00:28:43.140248 | orchestrator | Monday 05 January 2026 00:26:17 +0000 (0:00:00.293) 0:01:06.419 ******** 2026-01-05 00:28:43.140395 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:43.140414 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:43.140425 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:43.140439 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:43.140458 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:43.140476 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:43.140494 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:43.140512 | orchestrator | 2026-01-05 00:28:43.140532 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-01-05 00:28:43.140551 | orchestrator | Monday 05 January 2026 00:26:19 +0000 (0:00:01.876) 0:01:08.296 ******** 2026-01-05 00:28:43.140571 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:28:43.140591 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:28:43.140610 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:28:43.140625 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:28:43.140676 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:43.140688 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:28:43.140699 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:28:43.140710 | orchestrator | 2026-01-05 00:28:43.140721 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-05 00:28:43.140733 | orchestrator | Monday 05 January 2026 00:26:19 +0000 (0:00:00.599) 0:01:08.896 ******** 2026-01-05 00:28:43.140744 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:43.140755 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:43.140793 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:43.140817 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:43.140839 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:43.140850 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:43.140861 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:43.140872 | orchestrator | 2026-01-05 00:28:43.140883 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-05 00:28:43.140894 | orchestrator | Monday 05 January 2026 00:26:20 +0000 (0:00:00.267) 0:01:09.163 ******** 2026-01-05 00:28:43.140904 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:43.140915 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:43.140926 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:43.140937 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:43.140947 | orchestrator | ok: [testbed-node-0] 
2026-01-05 00:28:43.140958 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:43.140969 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:43.140979 | orchestrator | 2026-01-05 00:28:43.140990 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-05 00:28:43.141001 | orchestrator | Monday 05 January 2026 00:26:21 +0000 (0:00:01.320) 0:01:10.484 ******** 2026-01-05 00:28:43.141012 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:43.141023 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:28:43.141033 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:28:43.141044 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:28:43.141055 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:28:43.141065 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:28:43.141076 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:28:43.141087 | orchestrator | 2026-01-05 00:28:43.141098 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-05 00:28:43.141108 | orchestrator | Monday 05 January 2026 00:26:23 +0000 (0:00:01.630) 0:01:12.114 ******** 2026-01-05 00:28:43.141119 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:43.141130 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:43.141140 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:43.141151 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:43.141162 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:43.141173 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:43.141183 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:43.141194 | orchestrator | 2026-01-05 00:28:43.141205 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-05 00:28:43.141216 | orchestrator | Monday 05 January 2026 00:26:25 +0000 (0:00:02.395) 0:01:14.510 ******** 2026-01-05 00:28:43.141227 | orchestrator | ok: 
[testbed-manager] 2026-01-05 00:28:43.141238 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:43.141248 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:43.141259 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:43.141269 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:43.141280 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:43.141290 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:43.141301 | orchestrator | 2026-01-05 00:28:43.141312 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-05 00:28:43.141323 | orchestrator | Monday 05 January 2026 00:27:05 +0000 (0:00:40.376) 0:01:54.886 ******** 2026-01-05 00:28:43.141334 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:43.141344 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:28:43.141356 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:28:43.141379 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:28:43.141390 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:28:43.141401 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:28:43.141411 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:28:43.141422 | orchestrator | 2026-01-05 00:28:43.141433 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-05 00:28:43.141443 | orchestrator | Monday 05 January 2026 00:28:25 +0000 (0:01:19.974) 0:03:14.860 ******** 2026-01-05 00:28:43.141454 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:43.141464 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:43.141475 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:43.141486 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:43.141496 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:43.141507 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:43.141518 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:43.141529 | orchestrator | 2026-01-05 00:28:43.141539 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-01-05 00:28:43.141550 | orchestrator | Monday 05 January 2026 00:28:27 +0000 (0:00:01.796) 0:03:16.657 ******** 2026-01-05 00:28:43.141561 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:43.141572 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:43.141583 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:43.141593 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:43.141604 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:43.141614 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:43.141625 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:43.141636 | orchestrator | 2026-01-05 00:28:43.141647 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-05 00:28:43.141658 | orchestrator | Monday 05 January 2026 00:28:40 +0000 (0:00:13.121) 0:03:29.779 ******** 2026-01-05 00:28:43.141706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-05 00:28:43.141731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-05 00:28:43.141748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-05 00:28:43.141761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-05 00:28:43.141791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-05 00:28:43.141821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-05 00:28:43.141833 | orchestrator | 2026-01-05 00:28:43.141848 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-05 00:28:43.141860 | orchestrator | Monday 05 January 2026 00:28:41 +0000 (0:00:00.463) 0:03:30.242 ******** 2026-01-05 00:28:43.141871 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-01-05 00:28:43.141882 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:28:43.141893 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:28:43.141904 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:28:43.141915 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:28:43.141926 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:28:43.141937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:28:43.141947 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:28:43.141958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:28:43.141969 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:28:43.141980 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:28:43.141990 | orchestrator | 2026-01-05 00:28:43.142006 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-05 00:28:43.142064 | orchestrator | Monday 05 January 2026 00:28:43 +0000 (0:00:01.754) 0:03:31.996 ******** 2026-01-05 00:28:43.142076 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:28:43.142089 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:28:43.142100 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:28:43.142111 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:28:43.142124 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-01-05 00:28:43.142144 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:28:52.157970 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:28:52.158184 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:28:52.158203 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:28:52.158215 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:28:52.158226 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:28:52.158237 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:28:52.158248 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-05 00:28:52.158259 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:28:52.158270 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:28:52.158280 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:28:52.158319 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:28:52.158331 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:28:52.158342 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:28:52.158353 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2026-01-05 00:28:52.158364 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:28:52.158375 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:28:52.158388 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:28:52.158399 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:28:52.158409 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:28:52.158420 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:28:52.158431 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:28:52.158441 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:28:52.158452 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:28:52.158463 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:28:52.158474 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-05 00:28:52.158486 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:28:52.158500 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:28:52.158513 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:28:52.158527 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:28:52.158539 | orchestrator | skipping: 
[testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:28:52.158551 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:28:52.158563 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:28:52.158576 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:28:52.158589 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:28:52.158602 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:28:52.158632 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-05 00:28:52.158646 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:28:52.158659 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:28:52.158671 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-05 00:28:52.158683 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-05 00:28:52.158696 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-05 00:28:52.158708 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-05 00:28:52.158721 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-05 00:28:52.158753 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-05 00:28:52.158851 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-05 00:28:52.158866 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 
'value': 6}) 2026-01-05 00:28:52.158879 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-05 00:28:52.158890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-05 00:28:52.158900 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-05 00:28:52.158911 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-05 00:28:52.158922 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-05 00:28:52.158933 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-05 00:28:52.158943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-05 00:28:52.158954 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-05 00:28:52.158965 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-05 00:28:52.158975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-05 00:28:52.158986 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-05 00:28:52.158996 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-05 00:28:52.159007 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-05 00:28:52.159018 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-05 00:28:52.159028 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-05 00:28:52.159039 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-05 00:28:52.159050 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-05 00:28:52.159060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-05 00:28:52.159071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-05 00:28:52.159082 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-05 00:28:52.159092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-05 00:28:52.159103 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-05 00:28:52.159114 | orchestrator | 2026-01-05 00:28:52.159126 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-01-05 00:28:52.159137 | orchestrator | Monday 05 January 2026 00:28:49 +0000 (0:00:06.933) 0:03:38.929 ******** 2026-01-05 00:28:52.159147 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:28:52.159158 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:28:52.159169 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:28:52.159180 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:28:52.159190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:28:52.159201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:28:52.159212 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 
00:28:52.159222 | orchestrator | 2026-01-05 00:28:52.159243 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-01-05 00:28:52.159254 | orchestrator | Monday 05 January 2026 00:28:51 +0000 (0:00:01.680) 0:03:40.610 ******** 2026-01-05 00:28:52.159264 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:28:52.159281 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:28:52.159292 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:28:52.159303 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:28:52.159314 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:28:52.159325 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:28:52.159336 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:28:52.159346 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:28:52.159357 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:28:52.159368 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:28:52.159387 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:29:08.012996 | orchestrator | 2026-01-05 00:29:08.013131 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-01-05 00:29:08.013146 | orchestrator | Monday 05 January 2026 00:28:52 +0000 (0:00:00.512) 0:03:41.123 ******** 2026-01-05 00:29:08.013154 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 
00:29:08.013163 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:08.013171 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:29:08.013185 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:29:08.013202 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:29:08.013215 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:29:08.013223 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:29:08.013229 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:29:08.013236 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:29:08.013242 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:29:08.013249 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:29:08.013256 | orchestrator | 2026-01-05 00:29:08.013262 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-01-05 00:29:08.013276 | orchestrator | Monday 05 January 2026 00:28:54 +0000 (0:00:02.679) 0:03:43.803 ******** 2026-01-05 00:29:08.013284 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-05 00:29:08.013290 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:08.013296 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-05 00:29:08.013303 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:29:08.013310 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-05 00:29:08.013316 
| orchestrator | skipping: [testbed-node-1] 2026-01-05 00:29:08.013322 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-05 00:29:08.013328 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:29:08.013335 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-05 00:29:08.013367 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-05 00:29:08.013374 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-05 00:29:08.013380 | orchestrator | 2026-01-05 00:29:08.013387 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-01-05 00:29:08.013393 | orchestrator | Monday 05 January 2026 00:28:55 +0000 (0:00:00.645) 0:03:44.448 ******** 2026-01-05 00:29:08.013399 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:08.013406 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:29:08.013412 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:29:08.013419 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:29:08.013425 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:29:08.013431 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:29:08.013438 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:29:08.013444 | orchestrator | 2026-01-05 00:29:08.013450 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-01-05 00:29:08.013456 | orchestrator | Monday 05 January 2026 00:28:55 +0000 (0:00:00.336) 0:03:44.784 ******** 2026-01-05 00:29:08.013462 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:08.013470 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:08.013476 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:08.013482 | orchestrator | ok: [testbed-node-4] 
2026-01-05 00:29:08.013489 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:08.013495 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:08.013501 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:08.013507 | orchestrator |
2026-01-05 00:29:08.013513 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-05 00:29:08.013520 | orchestrator | Monday 05 January 2026 00:29:01 +0000 (0:00:05.793) 0:03:50.578 ********
2026-01-05 00:29:08.013527 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-05 00:29:08.013534 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-05 00:29:08.013541 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:29:08.013547 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:29:08.013554 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-05 00:29:08.013561 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-05 00:29:08.013567 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:29:08.013574 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:29:08.013580 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-05 00:29:08.013586 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-05 00:29:08.013593 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:29:08.013599 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:29:08.013605 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-05 00:29:08.013611 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:29:08.013618 | orchestrator |
2026-01-05 00:29:08.013624 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-05 00:29:08.013631 | orchestrator | Monday 05 January 2026 00:29:01 +0000 (0:00:00.347) 0:03:50.926 ********
2026-01-05 00:29:08.013637 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-05 00:29:08.013644 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-05 00:29:08.013650 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-05 00:29:08.013672 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-05 00:29:08.013678 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-05 00:29:08.013685 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-05 00:29:08.013691 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-05 00:29:08.013698 | orchestrator |
2026-01-05 00:29:08.013704 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-05 00:29:08.013710 | orchestrator | Monday 05 January 2026 00:29:03 +0000 (0:00:01.096) 0:03:52.023 ********
2026-01-05 00:29:08.013739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:29:08.013755 | orchestrator |
2026-01-05 00:29:08.013761 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-05 00:29:08.013769 | orchestrator | Monday 05 January 2026 00:29:03 +0000 (0:00:00.609) 0:03:52.633 ********
2026-01-05 00:29:08.013773 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:08.013777 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:08.013780 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:08.013820 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:08.013824 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:08.013828 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:08.013832 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:08.013835 | orchestrator |
2026-01-05 00:29:08.013839 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-05 00:29:08.013843 | orchestrator | Monday 05 January 2026 00:29:04 +0000 (0:00:01.315) 0:03:53.949 ********
2026-01-05 00:29:08.013847 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:08.013850 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:08.013854 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:08.013858 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:08.013861 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:08.013865 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:08.013869 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:08.013872 | orchestrator |
2026-01-05 00:29:08.013876 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-05 00:29:08.013880 | orchestrator | Monday 05 January 2026 00:29:05 +0000 (0:00:00.727) 0:03:54.677 ********
2026-01-05 00:29:08.013884 | orchestrator | changed: [testbed-manager]
2026-01-05 00:29:08.013888 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:29:08.013891 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:29:08.013895 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:29:08.013899 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:29:08.013902 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:29:08.013906 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:29:08.013910 | orchestrator |
2026-01-05 00:29:08.013913 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-05 00:29:08.013917 | orchestrator | Monday 05 January 2026 00:29:06 +0000 (0:00:00.626) 0:03:55.303 ********
2026-01-05 00:29:08.013921 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:08.013925 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:08.013928 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:08.013932 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:08.013936 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:08.013939 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:08.013943 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:08.013947 | orchestrator |
2026-01-05 00:29:08.013951 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-05 00:29:08.013954 | orchestrator | Monday 05 January 2026 00:29:06 +0000 (0:00:00.637) 0:03:55.941 ********
2026-01-05 00:29:08.013961 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571502.5005732, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:08.013971 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571529.905068, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:08.013979 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571519.4825203, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:08.014002 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571520.9238677, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.232938 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571583.6699817, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233070 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571521.539456, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233088 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571521.882949, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233100 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233112 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233166 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233179 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233220 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233233 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233244 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:29:13.233256 | orchestrator |
2026-01-05 00:29:13.233270 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-05 00:29:13.233283 | orchestrator | Monday 05 January 2026 00:29:08 +0000 (0:00:01.037) 0:03:56.979 ********
2026-01-05 00:29:13.233294 | orchestrator | changed: [testbed-manager]
2026-01-05 00:29:13.233306 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:29:13.233316 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:29:13.233327 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:29:13.233338 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:29:13.233351 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:29:13.233371 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:29:13.233389 | orchestrator |
2026-01-05 00:29:13.233406 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-05 00:29:13.233437 | orchestrator | Monday 05 January 2026 00:29:09 +0000 (0:00:01.226) 0:03:58.206 ********
2026-01-05 00:29:13.233456 | orchestrator | changed: [testbed-manager]
2026-01-05 00:29:13.233474 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:29:13.233492 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:29:13.233513 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:29:13.233534 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:29:13.233553 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:29:13.233571 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:29:13.233588 | orchestrator |
2026-01-05 00:29:13.233601 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-05 00:29:13.233614 | orchestrator | Monday 05 January 2026 00:29:10 +0000 (0:00:01.187) 0:03:59.393 ********
2026-01-05 00:29:13.233627 | orchestrator | changed: [testbed-manager]
2026-01-05 00:29:13.233638 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:29:13.233648 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:29:13.233659 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:29:13.233670 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:29:13.233681 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:29:13.233691 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:29:13.233702 | orchestrator |
2026-01-05 00:29:13.233719 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-05 00:29:13.233730 | orchestrator | Monday 05 January 2026 00:29:11 +0000 (0:00:01.213) 0:04:00.606 ********
2026-01-05 00:29:13.233741 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:29:13.233752 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:29:13.233763 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:29:13.233773 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:29:13.233784 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:29:13.233829 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:29:13.233840 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:29:13.233850 | orchestrator |
2026-01-05 00:29:13.233861 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-05 00:29:13.233872 | orchestrator | Monday 05 January 2026 00:29:11 +0000 (0:00:00.333) 0:04:00.940 ********
2026-01-05 00:29:13.233882 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:13.233894 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:13.233905 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:13.233916 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:13.233927 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:13.233937 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:13.233948 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:13.233958 | orchestrator |
2026-01-05 00:29:13.233969 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-05 00:29:13.233980 | orchestrator | Monday 05 January 2026 00:29:12 +0000 (0:00:00.815) 0:04:01.756 ********
2026-01-05 00:29:13.233993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:29:13.234006 | orchestrator |
2026-01-05 00:29:13.234096 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-05 00:29:13.234120 | orchestrator | Monday 05 January 2026 00:29:13 +0000 (0:00:00.445) 0:04:02.201 ********
2026-01-05 00:30:35.397211 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:35.397330 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:30:35.397343 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:30:35.397351 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:30:35.397358 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:30:35.397366 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:30:35.397373 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:30:35.397381 | orchestrator |
2026-01-05 00:30:35.397390 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-05 00:30:35.397420 | orchestrator | Monday 05 January 2026 00:29:22 +0000 (0:00:09.245) 0:04:11.447 ********
2026-01-05 00:30:35.397429 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:35.397436 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:35.397443 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:35.397450 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:35.397457 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:35.397464 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:35.397471 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:35.397479 | orchestrator |
2026-01-05 00:30:35.397486 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-05 00:30:35.397493 | orchestrator | Monday 05 January 2026 00:29:23 +0000 (0:00:01.454) 0:04:12.902 ********
2026-01-05 00:30:35.397500 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:35.397508 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:35.397515 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:35.397522 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:35.397529 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:35.397536 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:35.397542 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:35.397550 | orchestrator |
2026-01-05 00:30:35.397557 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-05 00:30:35.397564 | orchestrator | Monday 05 January 2026 00:29:25 +0000 (0:00:01.241) 0:04:14.144 ********
2026-01-05 00:30:35.397571 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:35.397578 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:35.397585 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:35.397592 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:35.397599 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:35.397606 | orchestrator | ok: [testbed-node-1]
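Each `Monday 05 January 2026 …` line in this log comes from a task-profiling callback: the parenthesised field is the duration of the task that just finished, and the trailing `H:MM:SS.mmm` field is the cumulative playbook runtime. As a sketch (the exact callback producing these lines is an assumption; the field layout below is taken from the log itself), the timings can be parsed like this:

```python
import re

# Sketch: parse profile-style timing lines from this log, e.g.
# "Monday 05 January 2026 00:29:22 +0000 (0:00:09.245) 0:04:11.447".
# Parenthesised field = previous task's duration; trailing field =
# cumulative elapsed playbook time.
TIMING = re.compile(r"\((\d+):(\d+):(\d+\.\d+)\)\s+(\d+):(\d+):(\d+\.\d+)")

def parse_timing(line: str):
    """Return (task_duration_s, elapsed_s) or None if no timing present."""
    m = TIMING.search(line)
    if not m:
        return None
    h1, m1, s1, h2, m2, s2 = m.groups()
    duration = int(h1) * 3600 + int(m1) * 60 + float(s1)
    elapsed = int(h2) * 3600 + int(m2) * 60 + float(s2)
    return duration, elapsed

print(parse_timing(
    "Monday 05 January 2026 00:29:22 +0000 (0:00:09.245) 0:04:11.447"))
```

Applied to the line above, this yields a 9.245 s task duration at 4 min 11.447 s of total runtime, which matches the jump from `0:04:02.201` at the previous task header.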
2026-01-05 00:30:35.397613 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:35.397620 | orchestrator |
2026-01-05 00:30:35.397627 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-05 00:30:35.397635 | orchestrator | Monday 05 January 2026 00:29:25 +0000 (0:00:00.304) 0:04:14.448 ********
2026-01-05 00:30:35.397643 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:35.397650 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:35.397657 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:35.397664 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:35.397671 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:35.397678 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:35.397685 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:35.397692 | orchestrator |
2026-01-05 00:30:35.397699 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-05 00:30:35.397706 | orchestrator | Monday 05 January 2026 00:29:25 +0000 (0:00:00.359) 0:04:14.807 ********
2026-01-05 00:30:35.397713 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:35.397720 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:35.397727 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:35.397734 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:35.397741 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:35.397750 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:35.397758 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:35.397766 | orchestrator |
2026-01-05 00:30:35.397776 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-05 00:30:35.397784 | orchestrator | Monday 05 January 2026 00:29:26 +0000 (0:00:00.299) 0:04:15.106 ********
2026-01-05 00:30:35.397792 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:35.397801 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:35.397809 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:35.397818 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:35.397826 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:35.397834 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:35.397859 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:35.397868 | orchestrator |
2026-01-05 00:30:35.397876 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-05 00:30:35.397890 | orchestrator | Monday 05 January 2026 00:29:32 +0000 (0:00:06.203) 0:04:21.309 ********
2026-01-05 00:30:35.397901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:30:35.397914 | orchestrator |
2026-01-05 00:30:35.397922 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-05 00:30:35.397931 | orchestrator | Monday 05 January 2026 00:29:32 +0000 (0:00:00.425) 0:04:21.735 ********
2026-01-05 00:30:35.397938 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-05 00:30:35.397946 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-05 00:30:35.397953 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-05 00:30:35.397960 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-05 00:30:35.397968 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:30:35.397975 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-05 00:30:35.397982 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-05 00:30:35.397989 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:30:35.397996 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
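The per-host result lines throughout this log (`ok: [host]`, `changed: [host]`, `skipping: [host]`) are what Ansible later aggregates into the play recap. As a hypothetical log-analysis helper, not part of the job or of Ansible, such lines can be tallied per host like this:

```python
import re
from collections import Counter

# Sketch: tally task results per host from console lines such as
# "changed: [testbed-node-0]" or "skipping: [testbed-manager]".
# A helper for reading logs like this one; not part of the job itself.
STATUS = re.compile(r"\b(ok|changed|skipping|failed): \[([\w-]+)\]")

def tally(lines):
    """Count (host, status) pairs across a sequence of log lines."""
    counts = Counter()
    for line in lines:
        for status, host in STATUS.findall(line):
            counts[(host, status)] += 1
    return counts

sample = [
    "changed: [testbed-node-0]",
    "ok: [testbed-manager]",
    "skipping: [testbed-node-0]",
]
print(tally(sample))
```

Because the regex is searched rather than anchored, it also catches statuses embedded in loop-item lines like `skipping: [testbed-node-5] => (item=apt-daily-upgrade)`.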
2026-01-05 00:30:35.398003 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:30:35.398010 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-05 00:30:35.398070 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-05 00:30:35.398077 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-05 00:30:35.398084 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:30:35.398092 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-05 00:30:35.398099 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:30:35.398120 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-05 00:30:35.398128 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:30:35.398135 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-05 00:30:35.398159 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-05 00:30:35.398167 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:30:35.398174 | orchestrator |
2026-01-05 00:30:35.398182 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-05 00:30:35.398189 | orchestrator | Monday 05 January 2026 00:29:33 +0000 (0:00:00.368) 0:04:22.104 ********
2026-01-05 00:30:35.398197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:30:35.398204 | orchestrator |
2026-01-05 00:30:35.398211 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-05 00:30:35.398219 | orchestrator | Monday 05 January 2026 00:29:33 +0000 (0:00:00.449) 0:04:22.553 ********
2026-01-05 00:30:35.398226 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-05 00:30:35.398233 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-05 00:30:35.398241 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:30:35.398248 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-05 00:30:35.398256 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:30:35.398263 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-05 00:30:35.398270 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:30:35.398277 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-05 00:30:35.398284 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:30:35.398292 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:30:35.398299 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-05 00:30:35.398312 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:30:35.398319 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-05 00:30:35.398326 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:30:35.398333 | orchestrator |
2026-01-05 00:30:35.398340 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-05 00:30:35.398348 | orchestrator | Monday 05 January 2026 00:29:33 +0000 (0:00:00.326) 0:04:22.879 ********
2026-01-05 00:30:35.398355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:30:35.398362 | orchestrator |
2026-01-05 00:30:35.398370 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-05 00:30:35.398377 | orchestrator | Monday 05 January 2026 00:29:34 +0000 (0:00:00.469) 0:04:23.349 ********
2026-01-05 00:30:35.398384 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:30:35.398391 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:30:35.398398 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:30:35.398406 | orchestrator | changed: [testbed-manager]
2026-01-05 00:30:35.398413 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:30:35.398420 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:30:35.398427 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:30:35.398434 | orchestrator |
2026-01-05 00:30:35.398441 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-05 00:30:35.398449 | orchestrator | Monday 05 January 2026 00:30:09 +0000 (0:00:35.336) 0:04:58.686 ********
2026-01-05 00:30:35.398456 | orchestrator | changed: [testbed-manager]
2026-01-05 00:30:35.398463 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:30:35.398470 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:30:35.398477 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:30:35.398485 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:30:35.398496 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:30:35.398503 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:30:35.398510 | orchestrator |
2026-01-05 00:30:35.398517 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-05 00:30:35.398525 | orchestrator | Monday 05 January 2026 00:30:18 +0000 (0:00:08.981) 0:05:07.667 ********
2026-01-05 00:30:35.398532 | orchestrator | changed: [testbed-manager]
2026-01-05 00:30:35.398539 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:30:35.398546 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:30:35.398553 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:30:35.398560 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:30:35.398568 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:30:35.398575 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:30:35.398582 | orchestrator |
2026-01-05 00:30:35.398589 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-05 00:30:35.398597 | orchestrator | Monday 05 January 2026 00:30:27 +0000 (0:00:08.350) 0:05:16.017 ********
2026-01-05 00:30:35.398604 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:35.398611 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:35.398618 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:35.398625 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:35.398633 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:35.398640 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:35.398647 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:35.398654 | orchestrator |
2026-01-05 00:30:35.398661 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-05 00:30:35.398669 | orchestrator | Monday 05 January 2026 00:30:28 +0000 (0:00:01.828) 0:05:17.845 ********
2026-01-05 00:30:35.398676 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:30:35.398683 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:30:35.398690 | orchestrator | changed: [testbed-manager]
2026-01-05 00:30:35.398698 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:30:35.398710 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:30:35.398717 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:30:35.398724 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:30:35.398731 | orchestrator |
2026-01-05 00:30:35.398743 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-05 00:30:46.938735 | orchestrator | Monday 05 January 2026 00:30:35 +0000 (0:00:06.515) 0:05:24.361 ********
2026-01-05 00:30:46.938965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:30:46.938990 | orchestrator |
2026-01-05 00:30:46.939003 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-05 00:30:46.939015 | orchestrator | Monday 05 January 2026 00:30:35 +0000 (0:00:00.438) 0:05:24.799 ********
2026-01-05 00:30:46.939028 | orchestrator | changed: [testbed-manager]
2026-01-05 00:30:46.939040 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:30:46.939051 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:30:46.939062 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:30:46.939073 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:30:46.939083 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:30:46.939095 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:30:46.939106 | orchestrator |
2026-01-05 00:30:46.939117 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-05 00:30:46.939127 | orchestrator | Monday 05 January 2026 00:30:36 +0000 (0:00:00.773) 0:05:25.573 ********
2026-01-05 00:30:46.939138 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:46.939151 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:46.939162 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:46.939173 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:46.939183 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:46.939194 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:46.939205 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:46.939216 | orchestrator |
2026-01-05 00:30:46.939227 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-05 00:30:46.939241 | orchestrator | Monday 05 January 2026 00:30:38 +0000 (0:00:01.746) 0:05:27.319 ********
2026-01-05 00:30:46.939253 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:30:46.939266 |
orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:46.939278 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:46.939292 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:46.939304 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:46.939317 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:46.939329 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:46.939341 | orchestrator | 2026-01-05 00:30:46.939354 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-01-05 00:30:46.939367 | orchestrator | Monday 05 January 2026 00:30:39 +0000 (0:00:00.793) 0:05:28.113 ******** 2026-01-05 00:30:46.939381 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:46.939393 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:46.939405 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:46.939417 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:46.939430 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:46.939443 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:46.939456 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:46.939484 | orchestrator | 2026-01-05 00:30:46.939496 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-01-05 00:30:46.939509 | orchestrator | Monday 05 January 2026 00:30:39 +0000 (0:00:00.320) 0:05:28.434 ******** 2026-01-05 00:30:46.939522 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:46.939534 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:46.939546 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:46.939559 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:46.939571 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:46.939614 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:46.939626 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:46.939636 | 
orchestrator | 2026-01-05 00:30:46.939648 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-01-05 00:30:46.939658 | orchestrator | Monday 05 January 2026 00:30:39 +0000 (0:00:00.399) 0:05:28.833 ******** 2026-01-05 00:30:46.939669 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:46.939680 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:46.939691 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:46.939702 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:46.939713 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:46.939741 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:46.939753 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:46.939763 | orchestrator | 2026-01-05 00:30:46.939774 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-01-05 00:30:46.939785 | orchestrator | Monday 05 January 2026 00:30:40 +0000 (0:00:00.308) 0:05:29.142 ******** 2026-01-05 00:30:46.939796 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:46.939807 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:46.939818 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:46.939828 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:46.939839 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:46.939911 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:46.939924 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:46.939935 | orchestrator | 2026-01-05 00:30:46.939945 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-01-05 00:30:46.939957 | orchestrator | Monday 05 January 2026 00:30:40 +0000 (0:00:00.304) 0:05:29.446 ******** 2026-01-05 00:30:46.939967 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:46.939978 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:46.939989 | orchestrator | ok: [testbed-node-4] 2026-01-05 
00:30:46.939999 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:46.940010 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:46.940020 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:46.940031 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:46.940042 | orchestrator | 2026-01-05 00:30:46.940052 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-05 00:30:46.940063 | orchestrator | Monday 05 January 2026 00:30:40 +0000 (0:00:00.324) 0:05:29.771 ******** 2026-01-05 00:30:46.940074 | orchestrator | ok: [testbed-manager] =>  2026-01-05 00:30:46.940084 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:46.940095 | orchestrator | ok: [testbed-node-3] =>  2026-01-05 00:30:46.940105 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:46.940116 | orchestrator | ok: [testbed-node-4] =>  2026-01-05 00:30:46.940127 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:46.940138 | orchestrator | ok: [testbed-node-5] =>  2026-01-05 00:30:46.940148 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:46.940180 | orchestrator | ok: [testbed-node-0] =>  2026-01-05 00:30:46.940191 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:46.940202 | orchestrator | ok: [testbed-node-1] =>  2026-01-05 00:30:46.940213 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:46.940224 | orchestrator | ok: [testbed-node-2] =>  2026-01-05 00:30:46.940235 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:46.940245 | orchestrator | 2026-01-05 00:30:46.940256 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-05 00:30:46.940267 | orchestrator | Monday 05 January 2026 00:30:41 +0000 (0:00:00.279) 0:05:30.051 ******** 2026-01-05 00:30:46.940278 | orchestrator | ok: [testbed-manager] =>  2026-01-05 00:30:46.940289 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:46.940300 | orchestrator | ok: 
[testbed-node-3] =>  2026-01-05 00:30:46.940311 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:46.940321 | orchestrator | ok: [testbed-node-4] =>  2026-01-05 00:30:46.940332 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:46.940343 | orchestrator | ok: [testbed-node-5] =>  2026-01-05 00:30:46.940364 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:46.940375 | orchestrator | ok: [testbed-node-0] =>  2026-01-05 00:30:46.940386 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:46.940397 | orchestrator | ok: [testbed-node-1] =>  2026-01-05 00:30:46.940407 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:46.940418 | orchestrator | ok: [testbed-node-2] =>  2026-01-05 00:30:46.940429 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:46.940440 | orchestrator | 2026-01-05 00:30:46.940451 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-05 00:30:46.940462 | orchestrator | Monday 05 January 2026 00:30:41 +0000 (0:00:00.335) 0:05:30.386 ******** 2026-01-05 00:30:46.940472 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:46.940485 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:46.940503 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:46.940520 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:46.940538 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:46.940557 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:46.940573 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:46.940584 | orchestrator | 2026-01-05 00:30:46.940595 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-05 00:30:46.940610 | orchestrator | Monday 05 January 2026 00:30:41 +0000 (0:00:00.301) 0:05:30.688 ******** 2026-01-05 00:30:46.940629 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:46.940649 | orchestrator | 
skipping: [testbed-node-3] 2026-01-05 00:30:46.940661 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:46.940671 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:46.940682 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:46.940693 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:46.940703 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:46.940720 | orchestrator | 2026-01-05 00:30:46.940739 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-01-05 00:30:46.940758 | orchestrator | Monday 05 January 2026 00:30:42 +0000 (0:00:00.333) 0:05:31.021 ******** 2026-01-05 00:30:46.940771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:30:46.940785 | orchestrator | 2026-01-05 00:30:46.940796 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-01-05 00:30:46.940807 | orchestrator | Monday 05 January 2026 00:30:42 +0000 (0:00:00.484) 0:05:31.506 ******** 2026-01-05 00:30:46.940819 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:46.940830 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:46.940840 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:46.940887 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:46.940899 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:46.940909 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:46.940920 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:46.940931 | orchestrator | 2026-01-05 00:30:46.940942 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-01-05 00:30:46.940960 | orchestrator | Monday 05 January 2026 00:30:43 +0000 (0:00:01.022) 0:05:32.528 ******** 2026-01-05 
00:30:46.940971 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:46.940981 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:46.940992 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:46.941002 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:46.941013 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:46.941023 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:46.941034 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:46.941044 | orchestrator | 2026-01-05 00:30:46.941055 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-01-05 00:30:46.941066 | orchestrator | Monday 05 January 2026 00:30:46 +0000 (0:00:02.979) 0:05:35.508 ******** 2026-01-05 00:30:46.941086 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-01-05 00:30:46.941097 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-01-05 00:30:46.941108 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-01-05 00:30:46.941118 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-01-05 00:30:46.941129 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-01-05 00:30:46.941140 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-01-05 00:30:46.941150 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:46.941161 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-01-05 00:30:46.941171 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-01-05 00:30:46.941182 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-01-05 00:30:46.941193 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:46.941203 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-01-05 00:30:46.941214 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-01-05 00:30:46.941224 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-01-05 00:30:46.941235 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:46.941246 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-01-05 00:30:46.941265 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-01-05 00:31:53.112320 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:53.112440 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-01-05 00:31:53.112455 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-01-05 00:31:53.112465 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-01-05 00:31:53.112474 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-01-05 00:31:53.112482 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:53.112491 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:53.112500 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-01-05 00:31:53.112509 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-01-05 00:31:53.112518 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-01-05 00:31:53.112526 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:53.112536 | orchestrator | 2026-01-05 00:31:53.112546 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-01-05 00:31:53.112556 | orchestrator | Monday 05 January 2026 00:30:47 +0000 (0:00:00.641) 0:05:36.149 ******** 2026-01-05 00:31:53.112565 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.112574 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.112582 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.112591 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.112600 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.112608 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.112617 | orchestrator | changed: [testbed-node-2] 
2026-01-05 00:31:53.112625 | orchestrator | 2026-01-05 00:31:53.112634 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-01-05 00:31:53.112643 | orchestrator | Monday 05 January 2026 00:30:53 +0000 (0:00:06.829) 0:05:42.979 ******** 2026-01-05 00:31:53.112652 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.112661 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.112669 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.112678 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.112686 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.112695 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.112704 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.112712 | orchestrator | 2026-01-05 00:31:53.112721 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-01-05 00:31:53.112730 | orchestrator | Monday 05 January 2026 00:30:55 +0000 (0:00:01.083) 0:05:44.062 ******** 2026-01-05 00:31:53.112739 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.112770 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.112780 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.112788 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.112797 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.112806 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.112815 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.112824 | orchestrator | 2026-01-05 00:31:53.112834 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-01-05 00:31:53.112845 | orchestrator | Monday 05 January 2026 00:31:04 +0000 (0:00:08.930) 0:05:52.992 ******** 2026-01-05 00:31:53.112856 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:53.112867 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.112902 | 
orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.112912 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.112923 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.112932 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.112942 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.112952 | orchestrator | 2026-01-05 00:31:53.112962 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-01-05 00:31:53.112971 | orchestrator | Monday 05 January 2026 00:31:07 +0000 (0:00:03.541) 0:05:56.534 ******** 2026-01-05 00:31:53.112982 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.112992 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.113002 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.113013 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.113022 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.113032 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.113042 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.113052 | orchestrator | 2026-01-05 00:31:53.113063 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-01-05 00:31:53.113073 | orchestrator | Monday 05 January 2026 00:31:09 +0000 (0:00:01.469) 0:05:58.004 ******** 2026-01-05 00:31:53.113083 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.113093 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.113103 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.113113 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.113124 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.113134 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.113144 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.113154 | orchestrator | 2026-01-05 00:31:53.113164 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-01-05 00:31:53.113175 | orchestrator | Monday 05 January 2026 00:31:10 +0000 (0:00:01.529) 0:05:59.533 ******** 2026-01-05 00:31:53.113185 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:53.113195 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:53.113206 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:53.113216 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:53.113224 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:53.113232 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:53.113241 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:53.113250 | orchestrator | 2026-01-05 00:31:53.113258 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-01-05 00:31:53.113267 | orchestrator | Monday 05 January 2026 00:31:11 +0000 (0:00:00.655) 0:06:00.189 ******** 2026-01-05 00:31:53.113276 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.113284 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.113293 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.113301 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.113309 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.113318 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.113327 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.113335 | orchestrator | 2026-01-05 00:31:53.113344 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-01-05 00:31:53.113376 | orchestrator | Monday 05 January 2026 00:31:21 +0000 (0:00:10.483) 0:06:10.673 ******** 2026-01-05 00:31:53.113386 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:53.113395 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.113403 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.113412 | orchestrator | changed: [testbed-node-4] 
2026-01-05 00:31:53.113420 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.113429 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.113438 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.113446 | orchestrator | 2026-01-05 00:31:53.113455 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-01-05 00:31:53.113464 | orchestrator | Monday 05 January 2026 00:31:22 +0000 (0:00:00.923) 0:06:11.597 ******** 2026-01-05 00:31:53.113472 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.113481 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.113490 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.113498 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.113507 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.113515 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.113524 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.113532 | orchestrator | 2026-01-05 00:31:53.113541 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-01-05 00:31:53.113550 | orchestrator | Monday 05 January 2026 00:31:33 +0000 (0:00:10.666) 0:06:22.264 ******** 2026-01-05 00:31:53.113558 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.113567 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.113576 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.113584 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.113593 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.113602 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.113610 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.113619 | orchestrator | 2026-01-05 00:31:53.113627 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-01-05 00:31:53.113636 | orchestrator | Monday 05 January 2026 00:31:45 
+0000 (0:00:12.216) 0:06:34.480 ******** 2026-01-05 00:31:53.113645 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-01-05 00:31:53.113654 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-01-05 00:31:53.113663 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-01-05 00:31:53.113672 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-01-05 00:31:53.113680 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-01-05 00:31:53.113689 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-01-05 00:31:53.113697 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-01-05 00:31:53.113706 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-01-05 00:31:53.113715 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-01-05 00:31:53.113723 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-01-05 00:31:53.113732 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-01-05 00:31:53.113740 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-01-05 00:31:53.113749 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-01-05 00:31:53.113758 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-01-05 00:31:53.113766 | orchestrator | 2026-01-05 00:31:53.113775 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-01-05 00:31:53.113783 | orchestrator | Monday 05 January 2026 00:31:46 +0000 (0:00:01.253) 0:06:35.733 ******** 2026-01-05 00:31:53.113792 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:31:53.113801 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:53.113860 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:53.113870 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:53.113894 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:53.113903 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 00:31:53.113918 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:53.113927 | orchestrator | 2026-01-05 00:31:53.113936 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-01-05 00:31:53.113944 | orchestrator | Monday 05 January 2026 00:31:47 +0000 (0:00:00.563) 0:06:36.296 ******** 2026-01-05 00:31:53.113953 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:53.113967 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:53.113976 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:53.113985 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:53.113993 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:53.114002 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:53.114010 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:53.114081 | orchestrator | 2026-01-05 00:31:53.114090 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-01-05 00:31:53.114099 | orchestrator | Monday 05 January 2026 00:31:52 +0000 (0:00:04.753) 0:06:41.050 ******** 2026-01-05 00:31:53.114108 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:31:53.114117 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:53.114126 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:53.114164 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:53.114174 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:53.114183 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:53.114192 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:53.114200 | orchestrator | 2026-01-05 00:31:53.114210 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-01-05 00:31:53.114219 | orchestrator | Monday 05 January 2026 00:31:52 +0000 (0:00:00.522) 0:06:41.572 ******** 2026-01-05 
00:31:53.114227 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-01-05 00:31:53.114236 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-01-05 00:31:53.114245 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:31:53.114253 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-01-05 00:31:53.114262 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-01-05 00:31:53.114271 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:53.114279 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-01-05 00:31:53.114288 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-01-05 00:31:53.114297 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:53.114313 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-01-05 00:32:14.103046 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-01-05 00:32:14.103154 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:32:14.103165 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-01-05 00:32:14.103172 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-01-05 00:32:14.103180 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:32:14.103187 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-01-05 00:32:14.103195 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-01-05 00:32:14.103201 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:32:14.103208 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-01-05 00:32:14.103215 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-01-05 00:32:14.103222 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:32:14.103228 | orchestrator | 2026-01-05 00:32:14.103237 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install 
python bindings from pip)] *** 2026-01-05 00:32:14.103245 | orchestrator | Monday 05 January 2026 00:31:53 +0000 (0:00:00.792) 0:06:42.365 ******** 2026-01-05 00:32:14.103252 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:32:14.103259 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:32:14.103266 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:32:14.103272 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:32:14.103302 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:32:14.103308 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:32:14.103315 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:32:14.103321 | orchestrator | 2026-01-05 00:32:14.103327 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-01-05 00:32:14.103333 | orchestrator | Monday 05 January 2026 00:31:53 +0000 (0:00:00.524) 0:06:42.890 ******** 2026-01-05 00:32:14.103340 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:32:14.103346 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:32:14.103352 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:32:14.103357 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:32:14.103363 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:32:14.103369 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:32:14.103375 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:32:14.103381 | orchestrator | 2026-01-05 00:32:14.103388 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-01-05 00:32:14.103394 | orchestrator | Monday 05 January 2026 00:31:54 +0000 (0:00:00.571) 0:06:43.462 ******** 2026-01-05 00:32:14.103400 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:32:14.103405 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:32:14.103411 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:32:14.103417 | orchestrator | skipping: 
[testbed-node-5]
2026-01-05 00:32:14.103423 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:14.103429 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:14.103435 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:14.103441 | orchestrator |
2026-01-05 00:32:14.103447 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-05 00:32:14.103452 | orchestrator | Monday 05 January 2026 00:31:55 +0000 (0:00:00.550) 0:06:44.012 ********
2026-01-05 00:32:14.103458 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.103465 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:14.103471 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:14.103478 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:14.103484 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:14.103491 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:14.103497 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:14.103502 | orchestrator |
2026-01-05 00:32:14.103509 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-05 00:32:14.103516 | orchestrator | Monday 05 January 2026 00:31:57 +0000 (0:00:02.029) 0:06:46.042 ********
2026-01-05 00:32:14.103525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:14.103534 | orchestrator |
2026-01-05 00:32:14.103541 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-05 00:32:14.103548 | orchestrator | Monday 05 January 2026 00:31:57 +0000 (0:00:00.885) 0:06:46.927 ********
2026-01-05 00:32:14.103555 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.103561 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:14.103568 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:14.103575 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:14.103582 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:14.103588 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:14.103595 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:14.103602 | orchestrator |
2026-01-05 00:32:14.103609 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-05 00:32:14.103616 | orchestrator | Monday 05 January 2026 00:31:58 +0000 (0:00:00.876) 0:06:47.804 ********
2026-01-05 00:32:14.103623 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.103630 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:14.103636 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:14.103641 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:14.103645 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:14.103656 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:14.103661 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:14.103665 | orchestrator |
2026-01-05 00:32:14.103670 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-05 00:32:14.103674 | orchestrator | Monday 05 January 2026 00:31:59 +0000 (0:00:00.962) 0:06:48.767 ********
2026-01-05 00:32:14.103679 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.103683 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:14.103687 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:14.103692 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:14.103696 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:14.103700 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:14.103704 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:14.103709 | orchestrator |
2026-01-05 00:32:14.103713 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-05 00:32:14.103731 | orchestrator | Monday 05 January 2026 00:32:01 +0000 (0:00:01.576) 0:06:50.343 ********
2026-01-05 00:32:14.103736 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:14.103741 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:14.103745 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:14.103750 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:14.103754 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:14.103758 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:14.103763 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:14.103767 | orchestrator |
2026-01-05 00:32:14.103772 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-05 00:32:14.103776 | orchestrator | Monday 05 January 2026 00:32:02 +0000 (0:00:01.504) 0:06:51.848 ********
2026-01-05 00:32:14.103781 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.103785 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:14.103789 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:14.103794 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:14.103798 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:14.103802 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:14.103806 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:14.103811 | orchestrator |
2026-01-05 00:32:14.103815 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-05 00:32:14.103819 | orchestrator | Monday 05 January 2026 00:32:04 +0000 (0:00:01.396) 0:06:53.244 ********
2026-01-05 00:32:14.103824 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:14.103828 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:14.103832 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:14.103837 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:14.103841 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:14.103845 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:14.103849 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:14.103854 | orchestrator |
2026-01-05 00:32:14.103858 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-05 00:32:14.103862 | orchestrator | Monday 05 January 2026 00:32:05 +0000 (0:00:01.497) 0:06:54.741 ********
2026-01-05 00:32:14.103867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:14.103872 | orchestrator |
2026-01-05 00:32:14.103876 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-05 00:32:14.103880 | orchestrator | Monday 05 January 2026 00:32:06 +0000 (0:00:01.036) 0:06:55.778 ********
2026-01-05 00:32:14.103885 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.103937 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:14.103943 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:14.103949 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:14.103957 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:14.103962 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:14.103974 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:14.103978 | orchestrator |
2026-01-05 00:32:14.103982 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-05 00:32:14.103986 | orchestrator | Monday 05 January 2026 00:32:08 +0000 (0:00:01.380) 0:06:57.158 ********
2026-01-05 00:32:14.103990 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.103993 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:14.103997 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:14.104001 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:14.104005 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:14.104008 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:14.104012 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:14.104016 | orchestrator |
2026-01-05 00:32:14.104019 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-01-05 00:32:14.104023 | orchestrator | Monday 05 January 2026 00:32:09 +0000 (0:00:01.123) 0:06:58.281 ********
2026-01-05 00:32:14.104027 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:14.104031 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:14.104034 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:14.104038 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:14.104042 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:14.104045 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:14.104049 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.104053 | orchestrator |
2026-01-05 00:32:14.104071 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-01-05 00:32:14.104075 | orchestrator | Monday 05 January 2026 00:32:11 +0000 (0:00:01.708) 0:06:59.989 ********
2026-01-05 00:32:14.104079 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:14.104083 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:14.104087 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:14.104091 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:14.104094 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:14.104098 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:14.104102 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:14.104105 | orchestrator |
2026-01-05 00:32:14.104109 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-01-05 00:32:14.104113 | orchestrator | Monday 05 January 2026 00:32:12 +0000 (0:00:01.890) 0:07:01.880 ********
2026-01-05 00:32:14.104117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:14.104121 | orchestrator |
2026-01-05 00:32:14.104124 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:14.104128 | orchestrator | Monday 05 January 2026 00:32:13 +0000 (0:00:00.888) 0:07:02.769 ********
2026-01-05 00:32:14.104132 | orchestrator |
2026-01-05 00:32:14.104136 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:14.104139 | orchestrator | Monday 05 January 2026 00:32:13 +0000 (0:00:00.040) 0:07:02.810 ********
2026-01-05 00:32:14.104143 | orchestrator |
2026-01-05 00:32:14.104147 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:14.104150 | orchestrator | Monday 05 January 2026 00:32:13 +0000 (0:00:00.047) 0:07:02.857 ********
2026-01-05 00:32:14.104154 | orchestrator |
2026-01-05 00:32:14.104158 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:14.104166 | orchestrator | Monday 05 January 2026 00:32:13 +0000 (0:00:00.040) 0:07:02.898 ********
2026-01-05 00:32:40.940599 | orchestrator |
2026-01-05 00:32:40.940709 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:40.940722 | orchestrator | Monday 05 January 2026 00:32:13 +0000 (0:00:00.040) 0:07:02.938 ********
2026-01-05 00:32:40.940732 | orchestrator |
2026-01-05 00:32:40.940740 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:40.940748 | orchestrator | Monday 05 January 2026 00:32:14 +0000 (0:00:00.047) 0:07:02.986 ********
2026-01-05 00:32:40.940779 | orchestrator |
2026-01-05 00:32:40.940788 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:40.940796 | orchestrator | Monday 05 January 2026 00:32:14 +0000 (0:00:00.040) 0:07:03.026 ********
2026-01-05 00:32:40.940804 | orchestrator |
2026-01-05 00:32:40.940812 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-05 00:32:40.940820 | orchestrator | Monday 05 January 2026 00:32:14 +0000 (0:00:00.040) 0:07:03.066 ********
2026-01-05 00:32:40.940828 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:40.940837 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:40.940845 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:40.940853 | orchestrator |
2026-01-05 00:32:40.940861 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-01-05 00:32:40.940869 | orchestrator | Monday 05 January 2026 00:32:15 +0000 (0:00:01.261) 0:07:04.327 ********
2026-01-05 00:32:40.940877 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:40.940885 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:40.940937 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:40.940945 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:40.940953 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:40.940961 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:40.940969 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:40.940978 | orchestrator |
2026-01-05 00:32:40.940986 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-01-05 00:32:40.940994 | orchestrator | Monday 05 January 2026 00:32:17 +0000 (0:00:01.793) 0:07:06.121 ********
2026-01-05 00:32:40.941002 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:40.941010 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:40.941018 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:40.941026 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:40.941033 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:40.941041 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:40.941049 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:40.941057 | orchestrator |
2026-01-05 00:32:40.941065 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-01-05 00:32:40.941073 | orchestrator | Monday 05 January 2026 00:32:18 +0000 (0:00:01.194) 0:07:07.316 ********
2026-01-05 00:32:40.941081 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:40.941089 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:40.941096 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:40.941104 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:40.941112 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:40.941120 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:40.941128 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:40.941137 | orchestrator |
2026-01-05 00:32:40.941147 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-01-05 00:32:40.941156 | orchestrator | Monday 05 January 2026 00:32:20 +0000 (0:00:02.284) 0:07:09.600 ********
2026-01-05 00:32:40.941166 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:40.941175 | orchestrator |
2026-01-05 00:32:40.941184 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-01-05 00:32:40.941194 | orchestrator | Monday 05 January 2026 00:32:20 +0000 (0:00:00.096) 0:07:09.696 ********
2026-01-05 00:32:40.941203 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:40.941212 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:40.941221 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:40.941230 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:40.941240 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:40.941250 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:40.941259 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:40.941268 | orchestrator |
2026-01-05 00:32:40.941293 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-01-05 00:32:40.941304 | orchestrator | Monday 05 January 2026 00:32:21 +0000 (0:00:01.096) 0:07:10.792 ********
2026-01-05 00:32:40.941320 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:40.941329 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:40.941338 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:40.941347 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:40.941356 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:40.941365 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:40.941375 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:40.941384 | orchestrator |
2026-01-05 00:32:40.941393 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-01-05 00:32:40.941403 | orchestrator | Monday 05 January 2026 00:32:22 +0000 (0:00:00.577) 0:07:11.370 ********
2026-01-05 00:32:40.941414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:40.941425 | orchestrator |
2026-01-05 00:32:40.941436 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-01-05 00:32:40.941445 | orchestrator | Monday 05 January 2026 00:32:23 +0000 (0:00:01.149) 0:07:12.519 ********
2026-01-05 00:32:40.941454 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:40.941463 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:40.941473 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:40.941483 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:40.941492 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:40.941500 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:40.941507 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:40.941515 | orchestrator |
2026-01-05 00:32:40.941523 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-01-05 00:32:40.941532 | orchestrator | Monday 05 January 2026 00:32:24 +0000 (0:00:00.894) 0:07:13.413 ********
2026-01-05 00:32:40.941540 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-01-05 00:32:40.941562 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-01-05 00:32:40.941571 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-01-05 00:32:40.941579 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-01-05 00:32:40.941587 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-01-05 00:32:40.941595 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-01-05 00:32:40.941603 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-01-05 00:32:40.941611 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-01-05 00:32:40.941619 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-01-05 00:32:40.941627 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-01-05 00:32:40.941635 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-01-05 00:32:40.941643 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-01-05 00:32:40.941650 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-01-05 00:32:40.941658 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-01-05 00:32:40.941666 | orchestrator |
2026-01-05 00:32:40.941674 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-01-05 00:32:40.941682 | orchestrator | Monday 05 January 2026 00:32:26 +0000 (0:00:02.517) 0:07:15.930 ********
2026-01-05 00:32:40.941690 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:40.941698 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:40.941705 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:40.941713 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:40.941721 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:40.941729 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:40.941737 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:40.941745 | orchestrator |
2026-01-05 00:32:40.941753 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-01-05 00:32:40.941767 | orchestrator | Monday 05 January 2026 00:32:27 +0000 (0:00:00.773) 0:07:16.703 ********
2026-01-05 00:32:40.941777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:40.941787 | orchestrator |
2026-01-05 00:32:40.941795 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-01-05 00:32:40.941803 | orchestrator | Monday 05 January 2026 00:32:28 +0000 (0:00:00.861) 0:07:17.565 ********
2026-01-05 00:32:40.941810 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:40.941819 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:40.941826 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:40.941834 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:40.941842 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:40.941850 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:40.941858 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:40.941866 | orchestrator |
2026-01-05 00:32:40.941874 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-01-05 00:32:40.941882 | orchestrator | Monday 05 January 2026 00:32:29 +0000 (0:00:00.865) 0:07:18.430 ********
2026-01-05 00:32:40.941905 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:40.941913 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:40.941921 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:40.941929 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:40.941936 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:40.941944 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:40.941952 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:40.941960 | orchestrator |
2026-01-05 00:32:40.941968 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-01-05 00:32:40.941976 | orchestrator | Monday 05 January 2026 00:32:30 +0000 (0:00:01.078) 0:07:19.508 ********
2026-01-05 00:32:40.941984 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:40.941991 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:40.942061 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:40.942072 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:40.942080 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:40.942088 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:40.942096 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:40.942103 | orchestrator |
2026-01-05 00:32:40.942111 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-01-05 00:32:40.942119 | orchestrator | Monday 05 January 2026 00:32:31 +0000 (0:00:00.512) 0:07:20.021 ********
2026-01-05 00:32:40.942127 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:40.942135 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:40.942143 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:40.942150 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:40.942158 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:40.942166 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:40.942173 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:40.942181 | orchestrator |
2026-01-05 00:32:40.942189 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-05 00:32:40.942197 | orchestrator | Monday 05 January 2026 00:32:32 +0000 (0:00:01.522) 0:07:21.543 ********
2026-01-05 00:32:40.942205 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:40.942213 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:40.942221 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:40.942228 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:40.942236 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:40.942244 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:40.942251 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:40.942259 | orchestrator |
2026-01-05 00:32:40.942267 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-05 00:32:40.942275 | orchestrator | Monday 05 January 2026 00:32:33 +0000 (0:00:00.534) 0:07:22.078 ********
2026-01-05 00:32:40.942288 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:40.942296 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:40.942304 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:40.942312 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:40.942320 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:40.942328 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:40.942342 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:14.352823 | orchestrator |
2026-01-05 00:33:14.353025 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-05 00:33:14.353041 | orchestrator | Monday 05 January 2026 00:32:40 +0000 (0:00:07.822) 0:07:29.900 ********
2026-01-05 00:33:14.353049 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.353058 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:14.353066 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:14.353073 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:14.353080 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:14.353136 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:14.353143 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:14.353151 | orchestrator |
2026-01-05 00:33:14.353158 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-05 00:33:14.353166 | orchestrator | Monday 05 January 2026 00:32:42 +0000 (0:00:01.622) 0:07:31.523 ********
2026-01-05 00:33:14.353173 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.353181 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:14.353188 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:14.353195 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:14.353202 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:14.353210 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:14.353218 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:14.353225 | orchestrator |
2026-01-05 00:33:14.353232 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-05 00:33:14.353240 | orchestrator | Monday 05 January 2026 00:32:44 +0000 (0:00:01.901) 0:07:33.425 ********
2026-01-05 00:33:14.353247 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.353254 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:14.353261 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:14.353268 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:14.353275 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:14.353282 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:14.353289 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:14.353297 | orchestrator |
2026-01-05 00:33:14.353309 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-05 00:33:14.353321 | orchestrator | Monday 05 January 2026 00:32:46 +0000 (0:00:01.688) 0:07:35.114 ********
2026-01-05 00:33:14.353334 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.353346 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:14.353358 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:14.353370 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:14.353381 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:14.353392 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:14.353403 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:14.353415 | orchestrator |
2026-01-05 00:33:14.353427 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-05 00:33:14.353438 | orchestrator | Monday 05 January 2026 00:32:47 +0000 (0:00:00.871) 0:07:35.986 ********
2026-01-05 00:33:14.353450 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:33:14.353463 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:33:14.353475 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:33:14.353487 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:33:14.353499 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:33:14.353512 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:33:14.353523 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:33:14.353536 | orchestrator |
2026-01-05 00:33:14.353550 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-05 00:33:14.353596 | orchestrator | Monday 05 January 2026 00:32:48 +0000 (0:00:01.099) 0:07:37.085 ********
2026-01-05 00:33:14.353606 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:33:14.353615 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:33:14.353623 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:33:14.353631 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:33:14.353639 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:33:14.353648 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:33:14.353656 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:33:14.353664 | orchestrator |
2026-01-05 00:33:14.353672 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-05 00:33:14.353681 | orchestrator | Monday 05 January 2026 00:32:48 +0000 (0:00:00.527) 0:07:37.613 ********
2026-01-05 00:33:14.353690 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.353716 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:14.353725 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:14.353734 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:14.353743 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:14.353751 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:14.353759 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:14.353767 | orchestrator |
2026-01-05 00:33:14.353776 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-05 00:33:14.353784 | orchestrator | Monday 05 January 2026 00:32:49 +0000 (0:00:00.539) 0:07:38.152 ********
2026-01-05 00:33:14.353791 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.353798 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:14.353805 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:14.353812 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:14.353819 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:14.353826 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:14.353833 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:14.353840 | orchestrator |
2026-01-05 00:33:14.353847 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-05 00:33:14.353854 | orchestrator | Monday 05 January 2026 00:32:49 +0000 (0:00:00.510) 0:07:38.662 ********
2026-01-05 00:33:14.353861 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.353868 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:14.353875 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:14.353882 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:14.353913 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:14.353922 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:14.353929 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:14.353936 | orchestrator |
2026-01-05 00:33:14.353943 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-05 00:33:14.353950 | orchestrator | Monday 05 January 2026 00:32:50 +0000 (0:00:00.766) 0:07:39.429 ********
2026-01-05 00:33:14.353958 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.353965 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:14.353971 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:14.353978 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:14.353985 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:14.353992 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:14.353999 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:14.354101 | orchestrator |
2026-01-05 00:33:14.354133 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-05 00:33:14.354141 | orchestrator | Monday 05 January 2026 00:32:55 +0000 (0:00:05.386) 0:07:44.816 ********
2026-01-05 00:33:14.354148 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:33:14.354156 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:33:14.354163 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:33:14.354170 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:33:14.354177 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:33:14.354184 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:33:14.354191 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:33:14.354198 | orchestrator |
2026-01-05 00:33:14.354213 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-05 00:33:14.354221 | orchestrator | Monday 05 January 2026 00:32:56 +0000 (0:00:00.580) 0:07:45.396 ********
2026-01-05 00:33:14.354230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:33:14.354240 | orchestrator |
2026-01-05 00:33:14.354247 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-05 00:33:14.354254 | orchestrator | Monday 05 January 2026 00:32:57 +0000 (0:00:01.066) 0:07:46.463 ********
2026-01-05 00:33:14.354261 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.354294 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:14.354303 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:14.354310 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:14.354317 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:14.354324 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:14.354331 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:14.354338 | orchestrator |
2026-01-05 00:33:14.354345 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-05 00:33:14.354353 | orchestrator | Monday 05 January 2026 00:32:59 +0000 (0:00:02.029) 0:07:48.492 ********
2026-01-05 00:33:14.354360 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.354367 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:14.354374 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:14.354381 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:14.354388 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:14.354395 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:14.354402 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:14.354409 | orchestrator |
2026-01-05 00:33:14.354417 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-05 00:33:14.354424 | orchestrator | Monday 05 January 2026 00:33:00 +0000 (0:00:01.188) 0:07:49.681 ********
2026-01-05 00:33:14.354431 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:14.354442 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:14.354454 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:14.354467 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:14.354479 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:14.354491 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:14.354502 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:14.354514 | orchestrator |
2026-01-05 00:33:14.354526 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-05 00:33:14.354539 | orchestrator | Monday 05 January 2026 00:33:01 +0000 (0:00:00.844) 0:07:50.525 ********
2026-01-05 00:33:14.354551 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:33:14.354566 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:33:14.354577 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:33:14.354599 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:33:14.354611 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:33:14.354622 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:33:14.354634 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:33:14.354646 | orchestrator |
2026-01-05 00:33:14.354669 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-05 00:33:14.354683 | orchestrator | Monday 05 January 2026 00:33:03 +0000 (0:00:01.956) 0:07:52.481 ********
2026-01-05 00:33:14.354696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:33:14.354709 | orchestrator |
2026-01-05 00:33:14.354717 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-05 00:33:14.354724 | orchestrator | Monday 05 January 2026 00:33:04 +0000 (0:00:00.845) 0:07:53.326 ********
2026-01-05 00:33:14.354731 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:14.354738 | orchestrator | changed: [testbed-manager]
2026-01-05 00:33:14.354746 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:14.354753 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:14.354760 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:14.354767 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:14.354774 | orchestrator | changed:
[testbed-node-3] 2026-01-05 00:33:14.354781 | orchestrator | 2026-01-05 00:33:14.354797 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-01-05 00:33:47.000641 | orchestrator | Monday 05 January 2026 00:33:14 +0000 (0:00:09.987) 0:08:03.314 ******** 2026-01-05 00:33:47.000792 | orchestrator | ok: [testbed-manager] 2026-01-05 00:33:47.000823 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:47.000845 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:47.000930 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:47.000955 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:47.000976 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:47.000997 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:47.001017 | orchestrator | 2026-01-05 00:33:47.001041 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-01-05 00:33:47.001063 | orchestrator | Monday 05 January 2026 00:33:16 +0000 (0:00:02.015) 0:08:05.330 ******** 2026-01-05 00:33:47.001083 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:47.001104 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:47.001126 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:47.001147 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:47.001168 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:47.001190 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:47.001212 | orchestrator | 2026-01-05 00:33:47.001235 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-01-05 00:33:47.001258 | orchestrator | Monday 05 January 2026 00:33:17 +0000 (0:00:01.313) 0:08:06.643 ******** 2026-01-05 00:33:47.001280 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.001303 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.001324 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:47.001345 | orchestrator | changed: 
[testbed-node-5] 2026-01-05 00:33:47.001366 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:47.001387 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.001407 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.001428 | orchestrator | 2026-01-05 00:33:47.001448 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-05 00:33:47.001467 | orchestrator | 2026-01-05 00:33:47.001485 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-05 00:33:47.001505 | orchestrator | Monday 05 January 2026 00:33:18 +0000 (0:00:01.314) 0:08:07.957 ******** 2026-01-05 00:33:47.001525 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:33:47.001545 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:33:47.001563 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:33:47.001581 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:33:47.001599 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:33:47.001616 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:33:47.001635 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:33:47.001654 | orchestrator | 2026-01-05 00:33:47.001715 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-05 00:33:47.001736 | orchestrator | 2026-01-05 00:33:47.001755 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-05 00:33:47.001775 | orchestrator | Monday 05 January 2026 00:33:19 +0000 (0:00:00.711) 0:08:08.669 ******** 2026-01-05 00:33:47.001787 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.001798 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:47.001809 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.001819 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:47.001830 | orchestrator | changed: [testbed-node-0] 2026-01-05 
00:33:47.001842 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.001853 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.001904 | orchestrator | 2026-01-05 00:33:47.001916 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-05 00:33:47.001927 | orchestrator | Monday 05 January 2026 00:33:21 +0000 (0:00:01.435) 0:08:10.104 ******** 2026-01-05 00:33:47.001937 | orchestrator | ok: [testbed-manager] 2026-01-05 00:33:47.001948 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:47.001959 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:47.001970 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:47.001980 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:47.001991 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:47.002002 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:47.002075 | orchestrator | 2026-01-05 00:33:47.002088 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-05 00:33:47.002100 | orchestrator | Monday 05 January 2026 00:33:22 +0000 (0:00:01.577) 0:08:11.681 ******** 2026-01-05 00:33:47.002130 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:33:47.002141 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:33:47.002188 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:33:47.002200 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:33:47.002211 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:33:47.002222 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:33:47.002233 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:33:47.002243 | orchestrator | 2026-01-05 00:33:47.002254 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-01-05 00:33:47.002265 | orchestrator | Monday 05 January 2026 00:33:23 +0000 (0:00:00.547) 0:08:12.229 ******** 2026-01-05 00:33:47.002277 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:33:47.002289 | orchestrator | 2026-01-05 00:33:47.002300 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-05 00:33:47.002311 | orchestrator | Monday 05 January 2026 00:33:24 +0000 (0:00:01.063) 0:08:13.292 ******** 2026-01-05 00:33:47.002324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:33:47.002338 | orchestrator | 2026-01-05 00:33:47.002349 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-05 00:33:47.002360 | orchestrator | Monday 05 January 2026 00:33:25 +0000 (0:00:00.862) 0:08:14.154 ******** 2026-01-05 00:33:47.002371 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.002382 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:47.002393 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:47.002403 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.002414 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.002425 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:47.002436 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.002446 | orchestrator | 2026-01-05 00:33:47.002483 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-05 00:33:47.002495 | orchestrator | Monday 05 January 2026 00:33:34 +0000 (0:00:09.466) 0:08:23.621 ******** 2026-01-05 00:33:47.002520 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.002531 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.002541 | orchestrator | changed: [testbed-node-4] 2026-01-05 
00:33:47.002552 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:47.002567 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:47.002586 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.002603 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.002622 | orchestrator | 2026-01-05 00:33:47.002640 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-05 00:33:47.002659 | orchestrator | Monday 05 January 2026 00:33:35 +0000 (0:00:01.128) 0:08:24.749 ******** 2026-01-05 00:33:47.002677 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.002697 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.002716 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:47.002733 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:47.002753 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:47.002772 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.002788 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.002799 | orchestrator | 2026-01-05 00:33:47.002809 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-05 00:33:47.002820 | orchestrator | Monday 05 January 2026 00:33:37 +0000 (0:00:01.399) 0:08:26.148 ******** 2026-01-05 00:33:47.002830 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.002841 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.002852 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:47.002920 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:47.002933 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:47.002944 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.002954 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.002965 | orchestrator | 2026-01-05 00:33:47.002976 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-01-05 00:33:47.002986 | orchestrator | Monday 05 January 2026 00:33:39 +0000 (0:00:02.015) 0:08:28.164 ******** 2026-01-05 00:33:47.002997 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.003008 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.003018 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:47.003028 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:47.003039 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:47.003050 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.003060 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.003071 | orchestrator | 2026-01-05 00:33:47.003082 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-01-05 00:33:47.003092 | orchestrator | Monday 05 January 2026 00:33:40 +0000 (0:00:01.400) 0:08:29.565 ******** 2026-01-05 00:33:47.003103 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.003114 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.003125 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:47.003135 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:47.003146 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:47.003157 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.003167 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.003177 | orchestrator | 2026-01-05 00:33:47.003188 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-01-05 00:33:47.003199 | orchestrator | 2026-01-05 00:33:47.003209 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-01-05 00:33:47.003220 | orchestrator | Monday 05 January 2026 00:33:41 +0000 (0:00:01.231) 0:08:30.796 ******** 2026-01-05 00:33:47.003231 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-05 00:33:47.003242 | orchestrator | 2026-01-05 00:33:47.003253 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-05 00:33:47.003274 | orchestrator | Monday 05 January 2026 00:33:42 +0000 (0:00:00.878) 0:08:31.675 ******** 2026-01-05 00:33:47.003292 | orchestrator | ok: [testbed-manager] 2026-01-05 00:33:47.003303 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:47.003314 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:47.003325 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:47.003335 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:47.003346 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:47.003356 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:47.003367 | orchestrator | 2026-01-05 00:33:47.003378 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-05 00:33:47.003388 | orchestrator | Monday 05 January 2026 00:33:43 +0000 (0:00:01.086) 0:08:32.761 ******** 2026-01-05 00:33:47.003399 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:47.003410 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:47.003420 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:47.003431 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:47.003442 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:47.003452 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:47.003462 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:47.003473 | orchestrator | 2026-01-05 00:33:47.003484 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-01-05 00:33:47.003494 | orchestrator | Monday 05 January 2026 00:33:45 +0000 (0:00:01.262) 0:08:34.023 ******** 2026-01-05 00:33:47.003505 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-05 00:33:47.003516 | orchestrator | 2026-01-05 00:33:47.003527 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-05 00:33:47.003537 | orchestrator | Monday 05 January 2026 00:33:46 +0000 (0:00:01.026) 0:08:35.050 ******** 2026-01-05 00:33:47.003548 | orchestrator | ok: [testbed-manager] 2026-01-05 00:33:47.003559 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:47.003569 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:47.003580 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:47.003591 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:47.003601 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:47.003612 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:47.003622 | orchestrator | 2026-01-05 00:33:47.003643 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-05 00:33:48.649837 | orchestrator | Monday 05 January 2026 00:33:46 +0000 (0:00:00.912) 0:08:35.962 ******** 2026-01-05 00:33:48.649969 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:48.649981 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:48.649989 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:48.649998 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:48.650005 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:48.650012 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:48.650068 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:48.650076 | orchestrator | 2026-01-05 00:33:48.650085 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:33:48.650094 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-01-05 00:33:48.650103 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-01-05 00:33:48.650113 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-05 00:33:48.650126 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-05 00:33:48.650138 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-01-05 00:33:48.650186 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-05 00:33:48.650198 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-05 00:33:48.650208 | orchestrator | 2026-01-05 00:33:48.650219 | orchestrator | 2026-01-05 00:33:48.650230 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:33:48.650241 | orchestrator | Monday 05 January 2026 00:33:48 +0000 (0:00:01.115) 0:08:37.078 ******** 2026-01-05 00:33:48.650252 | orchestrator | =============================================================================== 2026-01-05 00:33:48.650262 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.97s 2026-01-05 00:33:48.650272 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.38s 2026-01-05 00:33:48.650283 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.34s 2026-01-05 00:33:48.650294 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.25s 2026-01-05 00:33:48.650305 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.12s 2026-01-05 00:33:48.650318 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.95s 2026-01-05 00:33:48.650329 | orchestrator | osism.services.docker : Install docker package ------------------------- 
12.22s 2026-01-05 00:33:48.650340 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.67s 2026-01-05 00:33:48.650352 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.48s 2026-01-05 00:33:48.650364 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.99s 2026-01-05 00:33:48.650393 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.47s 2026-01-05 00:33:48.650406 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.25s 2026-01-05 00:33:48.650419 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.98s 2026-01-05 00:33:48.650431 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.93s 2026-01-05 00:33:48.650444 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.35s 2026-01-05 00:33:48.650456 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.82s 2026-01-05 00:33:48.650468 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.93s 2026-01-05 00:33:48.650482 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.83s 2026-01-05 00:33:48.650499 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.52s 2026-01-05 00:33:48.650513 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.20s 2026-01-05 00:33:48.977005 | orchestrator | + osism apply fail2ban 2026-01-05 00:34:01.941388 | orchestrator | 2026-01-05 00:34:01 | INFO  | Task 991eb4a4-0f7f-46ba-b59e-e251ae5b7773 (fail2ban) was prepared for execution. 
2026-01-05 00:34:01.941487 | orchestrator | 2026-01-05 00:34:01 | INFO  | It takes a moment until task 991eb4a4-0f7f-46ba-b59e-e251ae5b7773 (fail2ban) has been started and output is visible here. 2026-01-05 00:34:25.188153 | orchestrator | 2026-01-05 00:34:25.188293 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-01-05 00:34:25.188309 | orchestrator | 2026-01-05 00:34:25.188319 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-01-05 00:34:25.188329 | orchestrator | Monday 05 January 2026 00:34:06 +0000 (0:00:00.281) 0:00:00.281 ******** 2026-01-05 00:34:25.188339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:34:25.188367 | orchestrator | 2026-01-05 00:34:25.188373 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-01-05 00:34:25.188378 | orchestrator | Monday 05 January 2026 00:34:08 +0000 (0:00:01.233) 0:00:01.515 ******** 2026-01-05 00:34:25.188384 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:25.188390 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:25.188395 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:25.188400 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:25.188406 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:25.188411 | orchestrator | changed: [testbed-manager] 2026-01-05 00:34:25.188416 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:25.188421 | orchestrator | 2026-01-05 00:34:25.188426 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-01-05 00:34:25.188432 | orchestrator | Monday 05 January 2026 00:34:19 +0000 (0:00:11.663) 0:00:13.178 ******** 
2026-01-05 00:34:25.188437 | orchestrator | changed: [testbed-manager] 2026-01-05 00:34:25.188442 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:25.188447 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:25.188452 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:25.188457 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:25.188462 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:25.188467 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:25.188472 | orchestrator | 2026-01-05 00:34:25.188477 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-01-05 00:34:25.188482 | orchestrator | Monday 05 January 2026 00:34:21 +0000 (0:00:01.521) 0:00:14.700 ******** 2026-01-05 00:34:25.188488 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:25.188494 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:25.188499 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:25.188504 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:25.188509 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:25.188514 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:25.188519 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:25.188524 | orchestrator | 2026-01-05 00:34:25.188530 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-01-05 00:34:25.188535 | orchestrator | Monday 05 January 2026 00:34:22 +0000 (0:00:01.615) 0:00:16.316 ******** 2026-01-05 00:34:25.188540 | orchestrator | changed: [testbed-manager] 2026-01-05 00:34:25.188545 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:25.188550 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:25.188555 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:25.188561 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:25.188566 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:25.188571 | orchestrator | changed: 
[testbed-node-5] 2026-01-05 00:34:25.188576 | orchestrator | 2026-01-05 00:34:25.188581 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:34:25.188586 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:34:25.188604 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:34:25.188610 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:34:25.188615 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:34:25.188620 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:34:25.188626 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:34:25.188631 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:34:25.188641 | orchestrator | 2026-01-05 00:34:25.188646 | orchestrator | 2026-01-05 00:34:25.188652 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:34:25.188657 | orchestrator | Monday 05 January 2026 00:34:24 +0000 (0:00:01.821) 0:00:18.137 ******** 2026-01-05 00:34:25.188662 | orchestrator | =============================================================================== 2026-01-05 00:34:25.188667 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.66s 2026-01-05 00:34:25.188672 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.82s 2026-01-05 00:34:25.188677 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.62s 2026-01-05 00:34:25.188682 | orchestrator | osism.services.fail2ban : 
Copy configuration files ---------------------- 1.52s
2026-01-05 00:34:25.188687 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.23s
2026-01-05 00:34:25.543270 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-05 00:34:25.543413 | orchestrator | + osism apply network
2026-01-05 00:34:37.686275 | orchestrator | 2026-01-05 00:34:37 | INFO  | Task 5afd697e-199f-4102-ab0e-c4284ff015bc (network) was prepared for execution.
2026-01-05 00:34:37.686394 | orchestrator | 2026-01-05 00:34:37 | INFO  | It takes a moment until task 5afd697e-199f-4102-ab0e-c4284ff015bc (network) has been started and output is visible here.
2026-01-05 00:35:08.187960 | orchestrator |
2026-01-05 00:35:08.188088 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-05 00:35:08.188105 | orchestrator |
2026-01-05 00:35:08.188117 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-05 00:35:08.188129 | orchestrator | Monday 05 January 2026 00:34:42 +0000 (0:00:00.298) 0:00:00.298 ********
2026-01-05 00:35:08.188140 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:08.188153 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:08.188164 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:08.188175 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:08.188186 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:08.188197 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:08.188208 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:08.188219 | orchestrator |
2026-01-05 00:35:08.188230 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-05 00:35:08.188241 | orchestrator | Monday 05 January 2026 00:34:42 +0000 (0:00:00.744) 0:00:01.043 ********
2026-01-05 00:35:08.188255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:35:08.188268 | orchestrator |
2026-01-05 00:35:08.188280 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-05 00:35:08.188291 | orchestrator | Monday 05 January 2026 00:34:44 +0000 (0:00:01.296) 0:00:02.339 ********
2026-01-05 00:35:08.188302 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:08.188313 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:08.188323 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:08.188334 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:08.188345 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:08.188356 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:08.188367 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:08.188378 | orchestrator |
2026-01-05 00:35:08.188389 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-05 00:35:08.188400 | orchestrator | Monday 05 January 2026 00:34:46 +0000 (0:00:02.217) 0:00:04.557 ********
2026-01-05 00:35:08.188411 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:08.188422 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:08.188434 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:08.188448 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:08.188489 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:08.188503 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:08.188515 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:08.188534 | orchestrator |
2026-01-05 00:35:08.188553 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-05 00:35:08.188591 | orchestrator | Monday 05 January 2026 00:34:48 +0000 (0:00:01.903) 0:00:06.461 ********
2026-01-05 00:35:08.188604 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-05 00:35:08.188618 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-05 00:35:08.188631 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-05 00:35:08.188645 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-05 00:35:08.188658 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-05 00:35:08.188671 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-05 00:35:08.188684 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-05 00:35:08.188697 | orchestrator |
2026-01-05 00:35:08.188710 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-05 00:35:08.188722 | orchestrator | Monday 05 January 2026 00:34:49 +0000 (0:00:01.017) 0:00:07.478 ********
2026-01-05 00:35:08.188737 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 00:35:08.188757 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 00:35:08.188771 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 00:35:08.188784 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:35:08.188797 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 00:35:08.188833 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 00:35:08.188845 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 00:35:08.188856 | orchestrator |
2026-01-05 00:35:08.188867 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-05 00:35:08.188884 | orchestrator | Monday 05 January 2026 00:34:52 +0000 (0:00:03.562) 0:00:11.041 ********
2026-01-05 00:35:08.188896 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:08.188907 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:08.188918 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:08.188929 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:08.188940 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:08.188951 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:08.188962 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:08.188973 | orchestrator |
2026-01-05 00:35:08.188984 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-01-05 00:35:08.188995 | orchestrator | Monday 05 January 2026 00:34:54 +0000 (0:00:01.763) 0:00:12.804 ********
2026-01-05 00:35:08.189006 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 00:35:08.189017 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 00:35:08.189028 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 00:35:08.189038 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 00:35:08.189049 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:35:08.189060 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 00:35:08.189072 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 00:35:08.189082 | orchestrator |
2026-01-05 00:35:08.189093 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-01-05 00:35:08.189104 | orchestrator | Monday 05 January 2026 00:34:56 +0000 (0:00:01.848) 0:00:14.653 ********
2026-01-05 00:35:08.189115 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:08.189126 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:08.189137 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:08.189148 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:08.189159 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:08.189170 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:08.189181 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:08.189192 | orchestrator |
2026-01-05 00:35:08.189203 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-01-05 00:35:08.189232 | orchestrator | Monday 05 January 2026 00:34:57 +0000 (0:00:01.180) 0:00:15.834 ********
2026-01-05 00:35:08.189254 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:08.189265 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:08.189276 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:08.189287 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:08.189298 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:08.189309 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:08.189320 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:08.189331 | orchestrator |
2026-01-05 00:35:08.189342 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-01-05 00:35:08.189353 | orchestrator | Monday 05 January 2026 00:34:58 +0000 (0:00:00.704) 0:00:16.538 ********
2026-01-05 00:35:08.189364 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:08.189375 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:08.189385 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:08.189396 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:08.189407 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:08.189418 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:08.189429 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:08.189439 | orchestrator |
2026-01-05 00:35:08.189450 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-01-05 00:35:08.189462 | orchestrator | Monday 05 January 2026 00:35:00 +0000 (0:00:02.463) 0:00:19.002 ********
2026-01-05 00:35:08.189473 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:08.189484 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:08.189494 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:08.189505 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:08.189516 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:08.189527 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:08.189538 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-01-05 00:35:08.189551 | orchestrator |
2026-01-05 00:35:08.189562 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-01-05 00:35:08.189573 | orchestrator | Monday 05 January 2026 00:35:01 +0000 (0:00:00.951) 0:00:19.953 ********
2026-01-05 00:35:08.189584 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:08.189595 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:08.189605 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:08.189616 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:08.189627 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:08.189637 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:08.189649 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:08.189659 | orchestrator |
2026-01-05 00:35:08.189670 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-01-05 00:35:08.189681 | orchestrator | Monday 05 January 2026 00:35:03 +0000 (0:00:01.891) 0:00:21.845 ********
2026-01-05 00:35:08.189693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:35:08.189705 | orchestrator |
2026-01-05 00:35:08.189718 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-01-05 00:35:08.189738 | orchestrator | Monday 05 January 2026 00:35:05 +0000 (0:00:01.323) 0:00:23.168 ********
2026-01-05 00:35:08.189757 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:08.189776 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:08.189793 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:08.189834 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:08.189852 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:08.189870 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:08.189889 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:08.189907 | orchestrator |
2026-01-05 00:35:08.189926 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-01-05 00:35:08.189956 | orchestrator | Monday 05 January 2026 00:35:06 +0000 (0:00:01.051) 0:00:24.220 ********
2026-01-05 00:35:08.189975 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:08.189995 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:08.190106 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:08.190134 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:08.190145 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:08.190156 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:08.190174 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:08.190185 | orchestrator |
2026-01-05 00:35:08.190196 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-05 00:35:08.190207 | orchestrator | Monday 05 January 2026 00:35:06 +0000 (0:00:00.856) 0:00:25.077 ********
2026-01-05 00:35:08.190218 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:35:08.190228 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:35:08.190239 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:35:08.190250 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:35:08.190260 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:35:08.190271 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:35:08.190282 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:35:08.190293 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:35:08.190303 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:35:08.190314 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:35:08.190325 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:35:08.190336 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:35:08.190346 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:35:08.190357 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:35:08.190368 | orchestrator |
2026-01-05 00:35:08.190391 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-01-05 00:35:25.875426 | orchestrator | Monday 05 January 2026 00:35:08 +0000 (0:00:01.257) 0:00:26.334 ********
2026-01-05 00:35:25.875588 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:25.875616 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:25.875678 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:25.875700 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:25.875721 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:25.875740 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:25.875760 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:25.875780 | orchestrator |
2026-01-05 00:35:25.875827 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-01-05 00:35:25.875848 | orchestrator | Monday 05 January 2026 00:35:08 +0000 (0:00:00.617) 0:00:26.952 ********
2026-01-05 00:35:25.875871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:35:25.875895 | orchestrator |
2026-01-05 00:35:25.875914 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-01-05 00:35:25.875933 | orchestrator | Monday 05 January 2026 00:35:13 +0000 (0:00:04.808) 0:00:31.760 ********
2026-01-05 00:35:25.875954 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876027 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876216 | orchestrator |
2026-01-05 00:35:25.876228 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-01-05 00:35:25.876239 | orchestrator | Monday 05 January 2026 00:35:19 +0000 (0:00:06.159) 0:00:37.919 ********
2026-01-05 00:35:25.876261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876272 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:35:25.876355 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:25.876407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:40.463739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:35:40.464007 | orchestrator |
2026-01-05 00:35:40.464044 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-01-05 00:35:40.464068 | orchestrator | Monday 05 January 2026 00:35:25 +0000 (0:00:06.100) 0:00:44.020 ********
2026-01-05 00:35:40.464085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:35:40.464097 | orchestrator |
2026-01-05 00:35:40.464109 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-01-05 00:35:40.464120 | orchestrator | Monday 05 January 2026 00:35:27 +0000 (0:00:01.345) 0:00:45.366 ********
2026-01-05 00:35:40.464131 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:40.464143 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:40.464154 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:40.464165 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:40.464177 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:40.464188 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:40.464198 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:40.464209 | orchestrator |
2026-01-05 00:35:40.464220 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-05 00:35:40.464231 | orchestrator | Monday 05 January 2026 00:35:28 +0000 (0:00:01.219) 0:00:46.585 ********
2026-01-05 00:35:40.464245 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:35:40.464258 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:35:40.464271 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:35:40.464284 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:35:40.464296 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:40.464310 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:35:40.464322 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:35:40.464336 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:35:40.464348 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:35:40.464361 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:40.464374 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:35:40.464387 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:35:40.464400 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:35:40.464411 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:35:40.464422 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:40.464432 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:35:40.464443 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:35:40.464468 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:35:40.464479 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:35:40.464490 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:40.464501 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:35:40.464512 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:35:40.464523 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:35:40.464534 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:35:40.464553 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:40.464564 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:35:40.464575 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:35:40.464587 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:35:40.464597 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:35:40.464608 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:40.464619 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:35:40.464629 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:35:40.464640 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:35:40.464651 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:35:40.464661 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:40.464672 | orchestrator |
2026-01-05 00:35:40.464683 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-01-05 00:35:40.464714 | orchestrator | Monday 05 January 2026 00:35:29 +0000 (0:00:00.999) 0:00:47.584 ********
2026-01-05 00:35:40.464726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:35:40.464738 | orchestrator |
2026-01-05 00:35:40.464749 | orchestrator | TASK [osism.commons.network : Install required packages for network-extra-init] ***
2026-01-05 00:35:40.464759 | orchestrator | Monday 05 January 2026 00:35:30 +0000 (0:00:01.313) 0:00:48.898 ********
2026-01-05 00:35:40.464770 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:40.464781 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:40.464844 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:40.464855 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:40.464866 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:40.464876 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:40.464887 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:40.464898 | orchestrator |
2026-01-05 00:35:40.464909 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-01-05 00:35:40.464919 | orchestrator | Monday 05 January 2026 00:35:31 +0000 (0:00:00.851) 0:00:49.577 ********
2026-01-05 00:35:40.464930 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:40.464941 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:40.464952 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:40.464962 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:40.464973 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:40.464984 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:40.464995 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:40.465005 | orchestrator |
2026-01-05 00:35:40.465016 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-01-05 00:35:40.465027 | orchestrator | Monday 05 January 2026 00:35:32 +0000 (0:00:00.659) 0:00:50.429 ********
2026-01-05 00:35:40.465038 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:40.465048 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:40.465059 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:40.465070 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:40.465080 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:40.465091 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:40.465101 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:40.465112 | orchestrator |
2026-01-05 00:35:40.465123 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-01-05 00:35:40.465134 | orchestrator | Monday 05 January 2026 00:35:32 +0000 (0:00:00.890) 0:00:51.088 ********
2026-01-05 00:35:40.465153 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:40.465164 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:40.465175 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:40.465185 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:40.465196 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:40.465207 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:40.465217 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:40.465228 | orchestrator |
2026-01-05 00:35:40.465239 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-01-05 00:35:40.465250 | orchestrator | Monday 05 January 2026 00:35:33 +0000 (0:00:00.890) 0:00:51.979 ********
2026-01-05 00:35:40.465261 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:40.465272 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:40.465283 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:40.465294 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:40.465304 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:40.465315 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:40.465326 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:40.465337 | orchestrator |
2026-01-05 00:35:40.465347 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-05 00:35:40.465358 | orchestrator | Monday 05 January 2026 00:35:35 +0000 (0:00:01.607) 0:00:53.587 ********
2026-01-05 00:35:40.465369 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:40.465380 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:40.465396 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:40.465407 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:40.465418 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:40.465428 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:40.465439 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:40.465450 | orchestrator |
2026-01-05 00:35:40.465461 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-05 00:35:40.465472 | orchestrator | Monday 05 January 2026 00:35:36 +0000 (0:00:01.237) 0:00:54.824 ********
2026-01-05 00:35:40.465482 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:40.465493 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:40.465504 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:40.465514 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:40.465525 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:40.465536 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:40.465546 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:40.465557 | orchestrator |
2026-01-05 00:35:40.465568 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-05 00:35:40.465579 | orchestrator | Monday 05 January 2026 00:35:38 +0000 (0:00:02.290) 0:00:57.115 ********
2026-01-05 00:35:40.465590 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:40.465601 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:40.465611 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:40.465622 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:40.465633 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:40.465643 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:40.465654 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:40.465665 | orchestrator |
2026-01-05 00:35:40.465675 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-05 00:35:40.465686 | orchestrator | Monday 05 January 2026 00:35:39 +0000 (0:00:00.699) 0:00:57.814 ********
2026-01-05 00:35:40.465697 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:40.465708 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:40.465719 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:40.465729 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:40.465740 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:40.465751 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:40.465762 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:40.465772 | orchestrator |
2026-01-05 00:35:40.465800 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:35:40.890109 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-05 00:35:40.890228 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:35:40.890243 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:35:40.890254 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:35:40.890264 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:35:40.890274 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:35:40.890284 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:35:40.890294 | orchestrator |
2026-01-05 00:35:40.890305 | orchestrator |
2026-01-05 00:35:40.890316 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:35:40.890328 | orchestrator | Monday 05 January 2026 00:35:40 +0000 (0:00:00.800) 0:00:58.615 ********
2026-01-05 00:35:40.890337 | orchestrator | ===============================================================================
2026-01-05 00:35:40.890347 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.16s
2026-01-05 00:35:40.890357 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.10s
2026-01-05 00:35:40.890366 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.81s
2026-01-05 00:35:40.890376 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.56s
2026-01-05 00:35:40.890385 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.46s
2026-01-05 00:35:40.890395 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.29s
2026-01-05 00:35:40.890404 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.22s
2026-01-05 00:35:40.890414 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.90s
2026-01-05 00:35:40.890424 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.89s
2026-01-05 00:35:40.890433 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.85s
2026-01-05 00:35:40.890443 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.76s
2026-01-05 00:35:40.890452 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.61s
2026-01-05 00:35:40.890462 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.35s
2026-01-05 00:35:40.890471 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s
2026-01-05 00:35:40.890481 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.31s
2026-01-05 00:35:40.890491 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.30s
2026-01-05 00:35:40.890500 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s
2026-01-05 00:35:40.890510 | orchestrator |
osism.commons.network : Remove network-extra-init systemd service ------- 1.24s 2026-01-05 00:35:40.890523 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.22s 2026-01-05 00:35:40.890534 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.18s 2026-01-05 00:35:41.221863 | orchestrator | + osism apply wireguard 2026-01-05 00:35:53.407370 | orchestrator | 2026-01-05 00:35:53 | INFO  | Task ff870275-8950-47a2-a2dc-e2a8314c8fc2 (wireguard) was prepared for execution. 2026-01-05 00:35:53.407530 | orchestrator | 2026-01-05 00:35:53 | INFO  | It takes a moment until task ff870275-8950-47a2-a2dc-e2a8314c8fc2 (wireguard) has been started and output is visible here. 2026-01-05 00:36:14.676027 | orchestrator | 2026-01-05 00:36:14.676158 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-05 00:36:14.676174 | orchestrator | 2026-01-05 00:36:14.676187 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-05 00:36:14.676228 | orchestrator | Monday 05 January 2026 00:35:57 +0000 (0:00:00.240) 0:00:00.240 ******** 2026-01-05 00:36:14.676251 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:14.676274 | orchestrator | 2026-01-05 00:36:14.676296 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-05 00:36:14.676317 | orchestrator | Monday 05 January 2026 00:35:59 +0000 (0:00:01.629) 0:00:01.869 ******** 2026-01-05 00:36:14.676330 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:14.676342 | orchestrator | 2026-01-05 00:36:14.676353 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-05 00:36:14.676364 | orchestrator | Monday 05 January 2026 00:36:06 +0000 (0:00:07.249) 0:00:09.118 ******** 2026-01-05 00:36:14.676375 | orchestrator | changed: [testbed-manager] 2026-01-05 
00:36:14.676386 | orchestrator | 2026-01-05 00:36:14.676397 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-05 00:36:14.676408 | orchestrator | Monday 05 January 2026 00:36:07 +0000 (0:00:00.579) 0:00:09.698 ******** 2026-01-05 00:36:14.676419 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:14.676430 | orchestrator | 2026-01-05 00:36:14.676441 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-05 00:36:14.676452 | orchestrator | Monday 05 January 2026 00:36:07 +0000 (0:00:00.429) 0:00:10.127 ******** 2026-01-05 00:36:14.676463 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:14.676473 | orchestrator | 2026-01-05 00:36:14.676484 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-05 00:36:14.676496 | orchestrator | Monday 05 January 2026 00:36:08 +0000 (0:00:00.682) 0:00:10.810 ******** 2026-01-05 00:36:14.676506 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:14.676517 | orchestrator | 2026-01-05 00:36:14.676528 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-05 00:36:14.676540 | orchestrator | Monday 05 January 2026 00:36:08 +0000 (0:00:00.441) 0:00:11.251 ******** 2026-01-05 00:36:14.676551 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:14.676562 | orchestrator | 2026-01-05 00:36:14.676572 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-05 00:36:14.676583 | orchestrator | Monday 05 January 2026 00:36:09 +0000 (0:00:00.415) 0:00:11.667 ******** 2026-01-05 00:36:14.676594 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:14.676605 | orchestrator | 2026-01-05 00:36:14.676616 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-05 00:36:14.676627 | orchestrator | Monday 05 January 2026 
00:36:10 +0000 (0:00:01.258) 0:00:12.925 ******** 2026-01-05 00:36:14.676638 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:36:14.676650 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:14.676661 | orchestrator | 2026-01-05 00:36:14.676672 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-05 00:36:14.676683 | orchestrator | Monday 05 January 2026 00:36:11 +0000 (0:00:01.024) 0:00:13.949 ******** 2026-01-05 00:36:14.676693 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:14.676704 | orchestrator | 2026-01-05 00:36:14.676715 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-05 00:36:14.676726 | orchestrator | Monday 05 January 2026 00:36:13 +0000 (0:00:01.724) 0:00:15.673 ******** 2026-01-05 00:36:14.676737 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:14.676776 | orchestrator | 2026-01-05 00:36:14.676789 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:36:14.676800 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:36:14.676839 | orchestrator | 2026-01-05 00:36:14.676850 | orchestrator | 2026-01-05 00:36:14.676861 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:36:14.676872 | orchestrator | Monday 05 January 2026 00:36:14 +0000 (0:00:00.961) 0:00:16.635 ******** 2026-01-05 00:36:14.676883 | orchestrator | =============================================================================== 2026-01-05 00:36:14.676893 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.25s 2026-01-05 00:36:14.676904 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.72s 2026-01-05 00:36:14.676915 | orchestrator | osism.services.wireguard : 
Install iptables package --------------------- 1.63s 2026-01-05 00:36:14.676925 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s 2026-01-05 00:36:14.676936 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.02s 2026-01-05 00:36:14.676946 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2026-01-05 00:36:14.676957 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s 2026-01-05 00:36:14.676973 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2026-01-05 00:36:14.676984 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s 2026-01-05 00:36:14.676995 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2026-01-05 00:36:14.677006 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-01-05 00:36:15.014166 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-05 00:36:15.050878 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-05 00:36:15.050992 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-05 00:36:15.132141 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 172 0 --:--:-- --:--:-- --:--:-- 172 2026-01-05 00:36:15.142928 | orchestrator | + osism apply --environment custom workarounds 2026-01-05 00:36:17.198584 | orchestrator | 2026-01-05 00:36:17 | INFO  | Trying to run play workarounds in environment custom 2026-01-05 00:36:27.363170 | orchestrator | 2026-01-05 00:36:27 | INFO  | Task 3d6ba5a2-5a9f-4cf6-95cc-d05ac48c2eb2 (workarounds) was prepared for execution. 
2026-01-05 00:36:27.363297 | orchestrator | 2026-01-05 00:36:27 | INFO  | It takes a moment until task 3d6ba5a2-5a9f-4cf6-95cc-d05ac48c2eb2 (workarounds) has been started and output is visible here.
2026-01-05 00:36:54.189685 | orchestrator |
2026-01-05 00:36:54.189852 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:36:54.189869 | orchestrator |
2026-01-05 00:36:54.189880 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-05 00:36:54.189891 | orchestrator | Monday 05 January 2026 00:36:31 +0000 (0:00:00.144) 0:00:00.144 ********
2026-01-05 00:36:54.189902 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-05 00:36:54.189913 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-05 00:36:54.189922 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-05 00:36:54.189932 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-05 00:36:54.189942 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-05 00:36:54.189952 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-05 00:36:54.189961 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-05 00:36:54.189971 | orchestrator |
2026-01-05 00:36:54.189981 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-05 00:36:54.190014 | orchestrator |
2026-01-05 00:36:54.190069 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-05 00:36:54.190079 | orchestrator | Monday 05 January 2026 00:36:32 +0000 (0:00:00.851) 0:00:00.996 ********
2026-01-05 00:36:54.190089 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:54.190100 | orchestrator |
2026-01-05 00:36:54.190110 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-05 00:36:54.190120 | orchestrator |
2026-01-05 00:36:54.190129 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-05 00:36:54.190139 | orchestrator | Monday 05 January 2026 00:36:35 +0000 (0:00:02.561) 0:00:03.557 ********
2026-01-05 00:36:54.190149 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:54.190159 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:54.190168 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:54.190178 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:54.190187 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:54.190197 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:54.190207 | orchestrator |
2026-01-05 00:36:54.190216 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-05 00:36:54.190229 | orchestrator |
2026-01-05 00:36:54.190240 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-05 00:36:54.190252 | orchestrator | Monday 05 January 2026 00:36:37 +0000 (0:00:02.010) 0:00:05.567 ********
2026-01-05 00:36:54.190264 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:54.190276 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:54.190288 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:54.190299 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:54.190311 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:54.190322 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:54.190334 | orchestrator |
2026-01-05 00:36:54.190348 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-05 00:36:54.190364 | orchestrator | Monday 05 January 2026 00:36:38 +0000 (0:00:01.566) 0:00:07.134 ********
2026-01-05 00:36:54.190380 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:54.190396 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:54.190412 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:54.190427 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:54.190443 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:54.190458 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:54.190473 | orchestrator |
2026-01-05 00:36:54.190489 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-05 00:36:54.190524 | orchestrator | Monday 05 January 2026 00:36:42 +0000 (0:00:03.939) 0:00:11.073 ********
2026-01-05 00:36:54.190541 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:36:54.190557 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:36:54.190573 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:36:54.190589 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:36:54.190606 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:36:54.190622 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:36:54.190637 | orchestrator |
2026-01-05 00:36:54.190655 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-05 00:36:54.190671 | orchestrator |
2026-01-05 00:36:54.190687 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-05 00:36:54.190704 | orchestrator | Monday 05 January 2026 00:36:43 +0000 (0:00:00.768) 0:00:11.842 ********
2026-01-05 00:36:54.190747 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:54.190758 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:54.190779 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:54.190794 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:54.190809 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:54.190826 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:54.190841 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:54.190857 | orchestrator |
2026-01-05 00:36:54.190873 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-05 00:36:54.190891 | orchestrator | Monday 05 January 2026 00:36:45 +0000 (0:00:01.838) 0:00:13.680 ********
2026-01-05 00:36:54.190907 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:54.190924 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:54.190937 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:54.190947 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:54.190956 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:54.190965 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:54.190995 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:54.191006 | orchestrator |
2026-01-05 00:36:54.191015 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-05 00:36:54.191025 | orchestrator | Monday 05 January 2026 00:36:46 +0000 (0:00:01.688) 0:00:15.369 ********
2026-01-05 00:36:54.191034 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:54.191044 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:54.191054 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:54.191063 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:54.191072 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:54.191082 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:54.191092 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:54.191101 | orchestrator |
2026-01-05 00:36:54.191111 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-05 00:36:54.191121 | orchestrator | Monday 05 January 2026 00:36:48 +0000 (0:00:01.743) 0:00:17.113 ********
2026-01-05 00:36:54.191130 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:54.191140 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:54.191149 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:54.191159 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:54.191168 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:54.191177 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:54.191187 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:54.191196 | orchestrator |
2026-01-05 00:36:54.191206 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-05 00:36:54.191215 | orchestrator | Monday 05 January 2026 00:36:50 +0000 (0:00:01.812) 0:00:18.925 ********
2026-01-05 00:36:54.191224 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:36:54.191234 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:36:54.191243 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:36:54.191253 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:36:54.191262 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:36:54.191272 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:36:54.191281 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:36:54.191290 | orchestrator |
2026-01-05 00:36:54.191300 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-05 00:36:54.191309 | orchestrator |
2026-01-05 00:36:54.191319 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-05 00:36:54.191328 | orchestrator | Monday 05 January 2026 00:36:51 +0000 (0:00:00.669) 0:00:19.594 ********
2026-01-05 00:36:54.191338 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:54.191347 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:54.191357 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:54.191366 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:54.191376 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:54.191385 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:54.191397 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:54.191413 | orchestrator |
2026-01-05 00:36:54.191429 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:36:54.191459 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:36:54.191478 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:54.191494 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:54.191508 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:54.191524 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:54.191541 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:54.191558 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:54.191575 | orchestrator |
2026-01-05 00:36:54.191591 | orchestrator |
2026-01-05 00:36:54.191618 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:36:54.191636 | orchestrator | Monday 05 January 2026 00:36:54 +0000 (0:00:03.016) 0:00:22.611 ********
2026-01-05 00:36:54.191653 | orchestrator | ===============================================================================
2026-01-05 00:36:54.191669 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.94s
2026-01-05 00:36:54.191685 | orchestrator | Install python3-docker -------------------------------------------------- 3.02s
2026-01-05 00:36:54.191701 | orchestrator | Apply netplan configuration --------------------------------------------- 2.56s
2026-01-05 00:36:54.191737 | orchestrator | Apply netplan configuration --------------------------------------------- 2.01s
2026-01-05 00:36:54.191754 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.84s
2026-01-05 00:36:54.191770 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s
2026-01-05 00:36:54.191787 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.74s
2026-01-05 00:36:54.191803 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.69s
2026-01-05 00:36:54.191821 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.57s
2026-01-05 00:36:54.191838 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.85s
2026-01-05 00:36:54.191854 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s
2026-01-05 00:36:54.191881 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.67s
2026-01-05 00:36:54.972387 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-05 00:37:07.107281 | orchestrator | 2026-01-05 00:37:07 | INFO  | Task d0fc6a2e-62c0-40ed-8dca-c0041c9a53f2 (reboot) was prepared for execution.
2026-01-05 00:37:07.107402 | orchestrator | 2026-01-05 00:37:07 | INFO  | It takes a moment until task d0fc6a2e-62c0-40ed-8dca-c0041c9a53f2 (reboot) has been started and output is visible here.
2026-01-05 00:37:17.966653 | orchestrator |
2026-01-05 00:37:17.966796 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:37:17.966812 | orchestrator |
2026-01-05 00:37:17.966824 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:37:17.966836 | orchestrator | Monday 05 January 2026 00:37:11 +0000 (0:00:00.209) 0:00:00.210 ********
2026-01-05 00:37:17.966847 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:37:17.966859 | orchestrator |
2026-01-05 00:37:17.966870 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:37:17.966906 | orchestrator | Monday 05 January 2026 00:37:11 +0000 (0:00:00.104) 0:00:00.314 ********
2026-01-05 00:37:17.966918 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:17.966929 | orchestrator |
2026-01-05 00:37:17.966940 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:37:17.966951 | orchestrator | Monday 05 January 2026 00:37:12 +0000 (0:00:01.062) 0:00:01.377 ********
2026-01-05 00:37:17.966962 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:37:17.966973 | orchestrator |
2026-01-05 00:37:17.966984 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:37:17.966995 | orchestrator |
2026-01-05 00:37:17.967006 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:37:17.967017 | orchestrator | Monday 05 January 2026 00:37:12 +0000 (0:00:00.154) 0:00:01.531 ********
2026-01-05 00:37:17.967028 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:37:17.967039 | orchestrator |
2026-01-05 00:37:17.967050 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:37:17.967061 | orchestrator | Monday 05 January 2026 00:37:12 +0000 (0:00:00.110) 0:00:01.642 ********
2026-01-05 00:37:17.967073 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:17.967084 | orchestrator |
2026-01-05 00:37:17.967094 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:37:17.967105 | orchestrator | Monday 05 January 2026 00:37:13 +0000 (0:00:00.712) 0:00:02.355 ********
2026-01-05 00:37:17.967116 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:37:17.967127 | orchestrator |
2026-01-05 00:37:17.967138 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:37:17.967149 | orchestrator |
2026-01-05 00:37:17.967162 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:37:17.967174 | orchestrator | Monday 05 January 2026 00:37:13 +0000 (0:00:00.122) 0:00:02.477 ********
2026-01-05 00:37:17.967188 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:37:17.967201 | orchestrator |
2026-01-05 00:37:17.967214 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:37:17.967226 | orchestrator | Monday 05 January 2026 00:37:13 +0000 (0:00:00.237) 0:00:02.714 ********
2026-01-05 00:37:17.967239 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:17.967251 | orchestrator |
2026-01-05 00:37:17.967264 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:37:17.967277 | orchestrator | Monday 05 January 2026 00:37:14 +0000 (0:00:00.807) 0:00:03.522 ********
2026-01-05 00:37:17.967290 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:37:17.967302 | orchestrator |
2026-01-05 00:37:17.967315 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:37:17.967327 | orchestrator |
2026-01-05 00:37:17.967340 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:37:17.967354 | orchestrator | Monday 05 January 2026 00:37:14 +0000 (0:00:00.147) 0:00:03.669 ********
2026-01-05 00:37:17.967367 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:37:17.967379 | orchestrator |
2026-01-05 00:37:17.967392 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:37:17.967405 | orchestrator | Monday 05 January 2026 00:37:15 +0000 (0:00:00.111) 0:00:03.780 ********
2026-01-05 00:37:17.967436 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:17.967449 | orchestrator |
2026-01-05 00:37:17.967462 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:37:17.967475 | orchestrator | Monday 05 January 2026 00:37:15 +0000 (0:00:00.687) 0:00:04.468 ********
2026-01-05 00:37:17.967489 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:37:17.967502 | orchestrator |
2026-01-05 00:37:17.967515 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:37:17.967527 | orchestrator |
2026-01-05 00:37:17.967538 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:37:17.967569 | orchestrator | Monday 05 January 2026 00:37:15 +0000 (0:00:00.109) 0:00:04.577 ********
2026-01-05 00:37:17.967580 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:37:17.967591 | orchestrator |
2026-01-05 00:37:17.967602 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:37:17.967613 | orchestrator | Monday 05 January 2026 00:37:15 +0000 (0:00:00.110) 0:00:04.688 ********
2026-01-05 00:37:17.967624 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:17.967634 | orchestrator |
2026-01-05 00:37:17.967645 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:37:17.967656 | orchestrator | Monday 05 January 2026 00:37:16 +0000 (0:00:00.684) 0:00:05.372 ********
2026-01-05 00:37:17.967667 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:37:17.967677 | orchestrator |
2026-01-05 00:37:17.967688 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:37:17.967772 | orchestrator |
2026-01-05 00:37:17.967783 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:37:17.967794 | orchestrator | Monday 05 January 2026 00:37:16 +0000 (0:00:00.111) 0:00:05.483 ********
2026-01-05 00:37:17.967805 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:37:17.967816 | orchestrator |
2026-01-05 00:37:17.967827 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:37:17.967838 | orchestrator | Monday 05 January 2026 00:37:16 +0000 (0:00:00.116) 0:00:05.600 ********
2026-01-05 00:37:17.967848 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:17.967859 | orchestrator |
2026-01-05 00:37:17.967870 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:37:17.967881 | orchestrator | Monday 05 January 2026 00:37:17 +0000 (0:00:00.671) 0:00:06.272 ********
2026-01-05 00:37:17.967910 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:37:17.967922 | orchestrator |
2026-01-05 00:37:17.967933 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:37:17.967945 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:37:17.967957 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:37:17.967968 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:37:17.967979 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:37:17.967990 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:37:17.968000 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:37:17.968011 | orchestrator |
2026-01-05 00:37:17.968022 | orchestrator |
2026-01-05 00:37:17.968033 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:37:17.968044 | orchestrator | Monday 05 January 2026 00:37:17 +0000 (0:00:00.038) 0:00:06.310 ********
2026-01-05 00:37:17.968055 | orchestrator | ===============================================================================
2026-01-05 00:37:17.968066 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.63s
2026-01-05 00:37:17.968076 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s
2026-01-05 00:37:17.968087 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s
2026-01-05 00:37:18.334348 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-05 00:37:30.457847 | orchestrator | 2026-01-05 00:37:30 | INFO  | Task 517572be-7898-4e1d-a6ed-2d7c953ebb3e (wait-for-connection) was prepared for execution.
2026-01-05 00:37:30.457985 | orchestrator | 2026-01-05 00:37:30 | INFO  | It takes a moment until task 517572be-7898-4e1d-a6ed-2d7c953ebb3e (wait-for-connection) has been started and output is visible here.
2026-01-05 00:37:46.832884 | orchestrator |
2026-01-05 00:37:46.832971 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-05 00:37:46.832979 | orchestrator |
2026-01-05 00:37:46.832983 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-05 00:37:46.832989 | orchestrator | Monday 05 January 2026 00:37:34 +0000 (0:00:00.234) 0:00:00.234 ********
2026-01-05 00:37:46.832994 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:46.833000 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:46.833004 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:46.833008 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:46.833012 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:46.833016 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:46.833019 | orchestrator |
2026-01-05 00:37:46.833024 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:37:46.833045 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:46.833051 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:46.833055 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:46.833059 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:46.833063 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:46.833067 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:46.833071 | orchestrator |
2026-01-05 00:37:46.833074 | orchestrator |
2026-01-05 00:37:46.833078 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:37:46.833082 | orchestrator | Monday 05 January 2026 00:37:46 +0000 (0:00:11.635) 0:00:11.869 ********
2026-01-05 00:37:46.833086 | orchestrator | ===============================================================================
2026-01-05 00:37:46.833090 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.64s
2026-01-05 00:37:47.218810 | orchestrator | + osism apply hddtemp
2026-01-05 00:37:59.394180 | orchestrator | 2026-01-05 00:37:59 | INFO  | Task 0344fc83-1f60-413e-b2d3-fb7f8be0d8dd (hddtemp) was prepared for execution.
2026-01-05 00:37:59.394317 | orchestrator | 2026-01-05 00:37:59 | INFO  | It takes a moment until task 0344fc83-1f60-413e-b2d3-fb7f8be0d8dd (hddtemp) has been started and output is visible here.
2026-01-05 00:38:04.463310 | orchestrator | 2026-01-05 00:38:04 | INFO  | Task e9335aba-9584-4b94-997a-dec4bd5a96d4 (hddtemp) was prepared for execution.
2026-01-05 00:38:04.463420 | orchestrator | 2026-01-05 00:38:04 | INFO  | It takes a moment until task e9335aba-9584-4b94-997a-dec4bd5a96d4 (hddtemp) has been started and output is visible here.
2026-01-05 00:38:45.681063 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-01-05 00:38:45.681214 | orchestrator | -vvvv to see details
2026-01-05 00:38:45.681233 | orchestrator |
2026-01-05 00:38:45.681247 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-05 00:38:45.681259 | orchestrator |
2026-01-05 00:38:45.681271 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-05 00:38:45.681328 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true}
2026-01-05 00:38:45.681371 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}
2026-01-05 00:38:45.681384 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true}
2026-01-05 00:38:45.681396 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}
2026-01-05 00:38:45.681407 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true}
2026-01-05 00:38:45.681418 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true}
2026-01-05 00:38:45.681445 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true}
2026-01-05 00:38:45.681457 | orchestrator |
2026-01-05 00:38:45.681468 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:38:45.681480 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:38:45.681494 | orchestrator | testbed-node-0 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:38:45.681505 | orchestrator | testbed-node-1 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:38:45.681515 | orchestrator | testbed-node-2 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:38:45.681526 | orchestrator | testbed-node-3 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:38:45.681537 | orchestrator | testbed-node-4 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:38:45.681547 | orchestrator | testbed-node-5 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:38:45.681558 | orchestrator |
2026-01-05 00:38:45.681569 | orchestrator |
2026-01-05 00:38:45.681580 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-05 00:38:45.681590 | orchestrator |
2026-01-05 00:38:45.681601 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-05 00:38:45.681613 | orchestrator | Monday 05 January 2026 00:38:08 +0000 (0:00:00.230) 0:00:00.230 ********
2026-01-05 00:38:45.681632 | orchestrator | ok: [testbed-manager]
2026-01-05 00:38:45.681665 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:38:45.681676 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:38:45.681687 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:38:45.681698 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:38:45.681726 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:38:45.681738 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:38:45.681749 | orchestrator |
2026-01-05 00:38:45.681760 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-05 00:38:45.681771 | orchestrator | Monday 05 January 2026 00:38:09 +0000 (0:00:00.608) 0:00:00.838 ********
2026-01-05 00:38:45.681785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:38:45.681799 | orchestrator |
2026-01-05 00:38:45.681810 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-05 00:38:45.681821 | orchestrator | Monday 05 January 2026 00:38:10 +0000 (0:00:01.045) 0:00:01.884 ********
2026-01-05 00:38:45.681832 | orchestrator | ok: [testbed-manager]
2026-01-05 00:38:45.681843 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:38:45.681854 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:38:45.681864 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:38:45.681875 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:38:45.681886 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:38:45.681897 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:38:45.681908 | orchestrator |
2026-01-05 00:38:45.681919 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-05 00:38:45.681929 | orchestrator | Monday 05 January 2026 00:38:13 +0000 (0:00:03.040) 0:00:04.924 ********
2026-01-05 00:38:45.681940 | orchestrator | changed: [testbed-manager]
2026-01-05 00:38:45.681953 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:38:45.681963 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:38:45.681974 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:38:45.681985 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:38:45.681996 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:38:45.682008 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:38:45.682055 | orchestrator |
2026-01-05 00:38:45.682068 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-05 00:38:45.682079 | orchestrator | Monday 05 January 2026 00:38:14 +0000 (0:00:01.252) 0:00:06.177 ********
2026-01-05 00:38:45.682090 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:38:45.682101 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:38:45.682112 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:38:45.682122 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:38:45.682133 | orchestrator | ok: [testbed-manager]
2026-01-05 00:38:45.682144 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:38:45.682154 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:38:45.682165 | orchestrator |
2026-01-05 00:38:45.682209 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-05 00:38:45.682221 | orchestrator | Monday 05 January 2026 00:38:15 +0000 (0:00:01.414) 0:00:07.592 ********
2026-01-05 00:38:45.682232 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:38:45.682243 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:38:45.682253 | orchestrator | changed: [testbed-manager]
2026-01-05 00:38:45.682264 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:38:45.682275 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:38:45.682285 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:38:45.682296 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:38:45.682306 | orchestrator |
2026-01-05 00:38:45.682317 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-05 00:38:45.682328 | orchestrator | Monday 05 January 2026 00:38:16 +0000 (0:00:00.889) 0:00:08.481 ********
2026-01-05 00:38:45.682338 | orchestrator | changed: [testbed-manager]
2026-01-05 00:38:45.682359 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:38:45.682370 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:38:45.682381 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:38:45.682392 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:38:45.682402 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:38:45.682413 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:38:45.682424 | orchestrator |
2026-01-05 00:38:45.682435 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-05 00:38:45.682446 | orchestrator | Monday 05 January 2026 00:38:41 +0000 (0:00:25.157) 0:00:33.638 ********
2026-01-05 00:38:45.682457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:38:45.682468 | orchestrator |
2026-01-05 00:38:45.682479 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-05 00:38:45.682490 | orchestrator | Monday 05 January 2026 00:38:43 +0000 (0:00:01.239) 0:00:34.878 ********
2026-01-05 00:38:45.682501 | orchestrator | changed: [testbed-manager]
2026-01-05 00:38:45.682511 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:38:45.682522 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:38:45.682533 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:38:45.682543 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:38:45.682554 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:38:45.682565 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:38:45.682575 | orchestrator |
2026-01-05 00:38:45.682586 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:38:45.682597 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:38:45.682610 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:38:45.682622 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:38:45.682633 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:38:45.682668 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:38:46.071537 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:38:46.071732 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:38:46.071749 | orchestrator |
2026-01-05 00:38:46.071762 | orchestrator |
2026-01-05 00:38:46.071774 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:38:46.071787 | orchestrator | Monday 05 January 2026 00:38:45 +0000 (0:00:02.568) 0:00:37.446 ********
2026-01-05 00:38:46.071798 | orchestrator | ===============================================================================
2026-01-05 00:38:46.071809 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 25.16s
2026-01-05 00:38:46.071821 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 3.04s
2026-01-05 00:38:46.071831 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.57s
2026-01-05 00:38:46.071842 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.41s
2026-01-05 00:38:46.071853 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.25s
2026-01-05 00:38:46.071864 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.24s
2026-01-05 00:38:46.071901 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.05s
2026-01-05 00:38:46.071912 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.89s
2026-01-05 00:38:46.071923 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.61s
2026-01-05 00:38:46.451501 | orchestrator | ++ semver latest 7.1.1
2026-01-05 00:38:46.513062 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 00:38:46.513162 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-05 00:38:46.513177 | orchestrator | + sudo systemctl restart manager.service
2026-01-05 00:39:00.701132 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-05 00:39:00.701234 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-05 00:39:00.701250 | orchestrator | + local max_attempts=60
2026-01-05 00:39:00.701264 | orchestrator | + local name=ceph-ansible
2026-01-05 00:39:00.701275 | orchestrator | + local attempt_num=1
2026-01-05 00:39:00.701287 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:00.740391 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:00.740485 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:00.740501 | orchestrator | + sleep 5
2026-01-05 00:39:05.744863 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:05.775784 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:05.775874 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:05.775886 | orchestrator | + sleep 5
2026-01-05 00:39:10.780510 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:10.815844 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:10.815945 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:10.815955 | orchestrator | + sleep 5
2026-01-05 00:39:15.821387 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:15.870194 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:15.870315 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:15.870356 | orchestrator | + sleep 5
2026-01-05 00:39:20.874192 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:20.897189 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:20.897286 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:20.897296 | orchestrator | + sleep 5
2026-01-05 00:39:25.901371 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:25.944829 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:25.944952 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:25.944966 | orchestrator | + sleep 5
2026-01-05 00:39:30.950059 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:30.993572 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:30.993658 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:30.993665 | orchestrator | + sleep 5
2026-01-05 00:39:35.998206 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:36.080168 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:36.080289 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:36.080306 | orchestrator | + sleep 5
2026-01-05 00:39:41.083258 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:41.107738 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:41.107833 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:41.107850 | orchestrator | + sleep 5
2026-01-05 00:39:46.110204 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:46.148051 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:46.148182 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:46.148207 | orchestrator | + sleep 5
2026-01-05 00:39:51.152869 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:51.195128 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:51.195233 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:51.195247 | orchestrator | + sleep 5
2026-01-05 00:39:56.200383 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:39:56.239787 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:39:56.239902 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:39:56.239910 | orchestrator | + sleep 5
2026-01-05 00:40:01.244308 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:01.284264 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:01.284374 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:01.284388 | orchestrator | + sleep 5
2026-01-05 00:40:06.289498 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:06.330831 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:06.331092 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-05 00:40:06.331132 | orchestrator | + local max_attempts=60
2026-01-05 00:40:06.331156 | orchestrator | + local name=kolla-ansible
2026-01-05 00:40:06.331175 | orchestrator | + local attempt_num=1
2026-01-05 00:40:06.331209 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-05 00:40:06.364818 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:06.365034 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-05 00:40:06.365056 | orchestrator | + local max_attempts=60
2026-01-05 00:40:06.365069 | orchestrator | + local name=osism-ansible
2026-01-05 00:40:06.365081 | orchestrator | + local attempt_num=1
2026-01-05 00:40:06.365677 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-05 00:40:06.399366 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:06.399457 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-05 00:40:06.399469 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-05 00:40:06.577962 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-05 00:40:06.732863 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-05 00:40:06.886338 | orchestrator | ARA in osism-ansible already disabled.
2026-01-05 00:40:07.063890 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-05 00:40:07.065415 | orchestrator | + osism apply gather-facts
2026-01-05 00:40:19.331313 | orchestrator | 2026-01-05 00:40:19 | INFO  | Task e61f504a-a669-40ab-af22-081da1c875d2 (gather-facts) was prepared for execution.
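The `+` xtrace above can be folded back into the helper it came from. A minimal reconstruction of `wait_for_container_healthy` from the trace (polling `.State.Health.Status` every 5 seconds up to `max_attempts` times); the actual helper in the testbed scripts may differ in detail, and `docker` is called without the absolute path used in the trace:

```shell
# Reconstruction of wait_for_container_healthy from the xtrace above
# (assumption: the real helper in the testbed repository may differ).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the Docker health status every 5 seconds, as seen in the trace.
    while [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

With 60 attempts and a 5-second sleep this bounds the wait at roughly five minutes per container, which matches the ceph-ansible container taking about a minute to go from unhealthy through starting to healthy above.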
2026-01-05 00:40:19.331465 | orchestrator | 2026-01-05 00:40:19 | INFO  | It takes a moment until task e61f504a-a669-40ab-af22-081da1c875d2 (gather-facts) has been started and output is visible here.
2026-01-05 00:40:33.036764 | orchestrator |
2026-01-05 00:40:33.036887 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:40:33.036903 | orchestrator |
2026-01-05 00:40:33.036913 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:40:33.036924 | orchestrator | Monday 05 January 2026 00:40:23 +0000 (0:00:00.216) 0:00:00.216 ********
2026-01-05 00:40:33.036934 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:33.036945 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:33.036954 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:33.036964 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:33.036974 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:33.036983 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:33.036993 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:33.037002 | orchestrator |
2026-01-05 00:40:33.037012 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-05 00:40:33.037022 | orchestrator |
2026-01-05 00:40:33.037031 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-05 00:40:33.037042 | orchestrator | Monday 05 January 2026 00:40:32 +0000 (0:00:08.602) 0:00:08.819 ********
2026-01-05 00:40:33.037053 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:40:33.037063 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:40:33.037073 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:40:33.037082 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:40:33.037092 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:33.037101 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:33.037111 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:33.037120 | orchestrator |
2026-01-05 00:40:33.037130 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:40:33.037140 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:33.037179 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:33.037190 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:33.037215 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:33.037225 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:33.037235 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:33.037245 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:33.037254 | orchestrator |
2026-01-05 00:40:33.037264 | orchestrator |
2026-01-05 00:40:33.037274 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:40:33.037284 | orchestrator | Monday 05 January 2026 00:40:32 +0000 (0:00:00.490) 0:00:09.309 ********
2026-01-05 00:40:33.037293 | orchestrator | ===============================================================================
2026-01-05 00:40:33.037305 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.60s
2026-01-05 00:40:33.037316 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s
2026-01-05 00:40:33.259833 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
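The hddtemp play earlier hit the classic failure mode behind these UNREACHABLE errors: the mounted identity file (`no such identity: /ansible/secrets/id_rsa`) was missing until `manager.service` was restarted, after which gather-facts succeeded. An illustrative pre-flight sketch for that condition; the key path and `dragon` user are taken from the log, while the helper names and probe host are hypothetical:

```shell
# Illustrative pre-flight check for the failure mode seen in the hddtemp run.
# Helper names are hypothetical; path and user come from the log above.
identity_present() {
    local key=$1
    if [ ! -f "$key" ]; then
        # Without this file, every host fails with "no such identity".
        echo "no such identity: $key" >&2
        return 1
    fi
}

probe_node() {
    # BatchMode prevents the probe from hanging on a password prompt in CI.
    local key=$1 host=$2
    ssh -i "$key" -o BatchMode=yes -o ConnectTimeout=5 "dragon@$host" true
}
```

Something like `identity_present /ansible/secrets/id_rsa && probe_node /ansible/secrets/id_rsa 192.168.16.10` would have flagged the problem before the play was dispatched.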
2026-01-05 00:40:33.277224 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-01-05 00:40:33.287812 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-01-05 00:40:33.296943 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-01-05 00:40:33.305296 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-01-05 00:40:33.314481 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-01-05 00:40:33.322839 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-01-05 00:40:33.334459 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-01-05 00:40:33.347623 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-01-05 00:40:33.356771 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-01-05 00:40:33.368667 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-01-05 00:40:33.379223 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-01-05 00:40:33.390924 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-01-05 00:40:33.403280 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-01-05 00:40:33.413485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-01-05 00:40:33.426714 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-05 00:40:33.437753 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-05 00:40:33.448240 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-05 00:40:33.468820 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-05 00:40:33.490570 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-05 00:40:33.512149 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-05 00:40:33.610545 | orchestrator | ok: Runtime: 0:25:31.591837
2026-01-05 00:40:33.719260 |
2026-01-05 00:40:33.719406 | TASK [Deploy services]
2026-01-05 00:40:34.256510 | orchestrator | skipping: Conditional result was False
2026-01-05 00:40:34.276071 |
2026-01-05 00:40:34.276276 | TASK [Deploy in a nutshell]
2026-01-05 00:40:35.022858 | orchestrator | + set -e
2026-01-05 00:40:35.023025 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-05 00:40:35.023036 | orchestrator | ++ export INTERACTIVE=false
2026-01-05 00:40:35.023046 | orchestrator | ++ INTERACTIVE=false
2026-01-05 00:40:35.023052 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-05 00:40:35.023056 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-05 00:40:35.023073 | orchestrator | + source /opt/manager-vars.sh
2026-01-05 00:40:35.023097 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-05 00:40:35.023109 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-05 00:40:35.023114 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-05 00:40:35.023122 | orchestrator | ++ CEPH_VERSION=reef
2026-01-05 00:40:35.023126 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-05 00:40:35.023134 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-05 00:40:35.023138 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-05 00:40:35.023147 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-05 00:40:35.023150 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-05 00:40:35.023161 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-05 00:40:35.023165 | orchestrator | ++ export ARA=false
2026-01-05 00:40:35.023169 | orchestrator | ++ ARA=false
2026-01-05 00:40:35.023173 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-05 00:40:35.023178 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-05 00:40:35.023182 | orchestrator | ++ export TEMPEST=true
2026-01-05 00:40:35.023185 | orchestrator | ++ TEMPEST=true
2026-01-05 00:40:35.023189 | orchestrator | ++ export IS_ZUUL=true
2026-01-05 00:40:35.023194 | orchestrator | ++ IS_ZUUL=true
2026-01-05 00:40:35.023198 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.38
2026-01-05 00:40:35.023202 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.38
2026-01-05 00:40:35.023205 | orchestrator | ++ export EXTERNAL_API=false
2026-01-05 00:40:35.023209 | orchestrator | ++ EXTERNAL_API=false
2026-01-05 00:40:35.023213 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-05 00:40:35.023217 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-05 00:40:35.023223 | orchestrator |
2026-01-05 00:40:35.023227 | orchestrator | # PULL IMAGES
2026-01-05 00:40:35.023231 | orchestrator |
2026-01-05 00:40:35.023235 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-05 00:40:35.023239 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-05 00:40:35.023242 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-05 00:40:35.023250 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-05 00:40:35.023255 | orchestrator | + echo
2026-01-05 00:40:35.023259 | orchestrator | + echo '# PULL IMAGES'
2026-01-05 00:40:35.023263 | orchestrator | + echo
2026-01-05 00:40:35.024762 | orchestrator | ++ semver latest 7.0.0
2026-01-05 00:40:35.072261 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 00:40:35.072335 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-05 00:40:35.072343 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-05 00:40:36.862686 | orchestrator | 2026-01-05 00:40:36 | INFO  | Trying to run play pull-images in environment custom
2026-01-05 00:40:47.028341 | orchestrator | 2026-01-05 00:40:47 | INFO  | Task 8f9c4c6f-f5b4-4a27-896d-5877d79ac0e0 (pull-images) was prepared for execution.
2026-01-05 00:40:47.028470 | orchestrator | 2026-01-05 00:40:47 | INFO  | Task 8f9c4c6f-f5b4-4a27-896d-5877d79ac0e0 is running in background. No more output. Check ARA for logs.
2026-01-05 00:40:49.467463 | orchestrator | 2026-01-05 00:40:49 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-05 00:40:59.719361 | orchestrator | 2026-01-05 00:40:59 | INFO  | Task b3e1ccb5-9126-43f3-b826-7d94f0f3ad9c (wipe-partitions) was prepared for execution.
2026-01-05 00:40:59.719496 | orchestrator | 2026-01-05 00:40:59 | INFO  | It takes a moment until task b3e1ccb5-9126-43f3-b826-7d94f0f3ad9c (wipe-partitions) has been started and output is visible here.
2026-01-05 00:41:12.867933 | orchestrator |
2026-01-05 00:41:12.868067 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-05 00:41:12.868083 | orchestrator |
2026-01-05 00:41:12.868093 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-05 00:41:12.868108 | orchestrator | Monday 05 January 2026 00:41:03 +0000 (0:00:00.116) 0:00:00.116 ********
2026-01-05 00:41:12.868119 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:41:12.868129 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:41:12.868138 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:41:12.868147 | orchestrator |
2026-01-05 00:41:12.868156 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-05 00:41:12.868190 | orchestrator | Monday 05 January 2026 00:41:04 +0000 (0:00:00.581) 0:00:00.697 ********
2026-01-05 00:41:12.868199 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:12.868209 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:41:12.868222 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:12.868231 | orchestrator |
2026-01-05 00:41:12.868240 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-05 00:41:12.868248 | orchestrator | Monday 05 January 2026 00:41:04 +0000 (0:00:00.566) 0:00:01.016 ********
2026-01-05 00:41:12.868257 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:41:12.868267 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:12.868276 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:41:12.868284 | orchestrator |
2026-01-05 00:41:12.868293 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-05 00:41:12.868302 | orchestrator | Monday 05 January 2026 00:41:05 +0000 (0:00:00.254) 0:00:01.583 ********
2026-01-05 00:41:12.868311 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:12.868319 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:41:12.868328 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:12.868336 | orchestrator |
2026-01-05 00:41:12.868345 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-05 00:41:12.868353 | orchestrator | Monday 05 January 2026 00:41:05 +0000 (0:00:00.254) 0:00:01.838 ********
2026-01-05 00:41:12.868363 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:41:12.868375 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:41:12.868383 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:41:12.868392 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:41:12.868401 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:41:12.868409 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:41:12.868417 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:41:12.868426 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:41:12.868436 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:41:12.868447 | orchestrator |
2026-01-05 00:41:12.868457 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-05 00:41:12.868468 | orchestrator | Monday 05 January 2026 00:41:07 +0000 (0:00:02.191) 0:00:04.030 ********
2026-01-05 00:41:12.868478 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:41:12.868488 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:41:12.868498 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:41:12.868508 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:41:12.868519 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:41:12.868529 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:41:12.868539 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:41:12.868550 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:41:12.868560 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:41:12.868592 | orchestrator |
2026-01-05 00:41:12.868603 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-05 00:41:12.868613 | orchestrator | Monday 05 January 2026 00:41:09 +0000 (0:00:01.568) 0:00:05.598 ********
2026-01-05 00:41:12.868623 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:41:12.868634 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:41:12.868643 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:41:12.868653 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:41:12.868663 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:41:12.868680 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:41:12.868690 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:41:12.868708 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:41:12.868719 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:41:12.868729 | orchestrator |
2026-01-05 00:41:12.868740 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-05 00:41:12.868750 | orchestrator | Monday 05 January 2026 00:41:11 +0000 (0:00:02.128) 0:00:07.727 ********
2026-01-05 00:41:12.868761 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:41:12.868771 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:41:12.868782 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:41:12.868792 | orchestrator |
2026-01-05 00:41:12.868801 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-05 00:41:12.868809 | orchestrator | Monday 05 January 2026 00:41:12 +0000 (0:00:00.617) 0:00:08.345 ********
2026-01-05 00:41:12.868818 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:41:12.868827 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:41:12.868836 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:41:12.868844 | orchestrator |
2026-01-05 00:41:12.868853 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:41:12.868864 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:12.868874 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:12.868900 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:12.868909 | orchestrator |
2026-01-05 00:41:12.868918 | orchestrator |
2026-01-05 00:41:12.868927 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:41:12.868935 | orchestrator | Monday 05 January 2026 00:41:12 +0000 (0:00:00.603) 0:00:08.948 ********
2026-01-05 00:41:12.868944 | orchestrator | ===============================================================================
2026-01-05 00:41:12.868953 | orchestrator | Check device availability ----------------------------------------------- 2.19s
2026-01-05 00:41:12.868961 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s
2026-01-05 00:41:12.868970 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s
2026-01-05 00:41:12.868978 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s
2026-01-05 00:41:12.868987 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s
2026-01-05 00:41:12.868995 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2026-01-05 00:41:12.869004 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s
2026-01-05 00:41:12.869012 | orchestrator | Remove all rook related logical devices --------------------------------- 0.32s
2026-01-05 00:41:12.869021 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2026-01-05 00:41:25.195656 | orchestrator | 2026-01-05 00:41:25 | INFO  | Task 7f910913-bdc2-46c0-a1e2-39828a1e177e (facts) was prepared for execution.
2026-01-05 00:41:25.195761 | orchestrator | 2026-01-05 00:41:25 | INFO  | It takes a moment until task 7f910913-bdc2-46c0-a1e2-39828a1e177e (facts) has been started and output is visible here.
2026-01-05 00:41:38.189035 | orchestrator |
2026-01-05 00:41:38.189168 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-05 00:41:38.189184 | orchestrator |
2026-01-05 00:41:38.189196 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-05 00:41:38.189208 | orchestrator | Monday 05 January 2026 00:41:29 +0000 (0:00:00.270) 0:00:00.270 ********
2026-01-05 00:41:38.189220 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:41:38.189232 | orchestrator | ok: [testbed-manager]
2026-01-05 00:41:38.189243 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:41:38.189283 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:41:38.189294 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:41:38.189305 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:41:38.189316 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:38.189326 | orchestrator |
2026-01-05 00:41:38.189340 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-05 00:41:38.189351 | orchestrator | Monday 05 January 2026 00:41:30 +0000 (0:00:01.177) 0:00:01.448 ********
2026-01-05 00:41:38.189361 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:41:38.189373 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:41:38.189384 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:41:38.189395 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:41:38.189405 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:38.189416 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:41:38.189427 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:38.189437 | orchestrator |
2026-01-05 00:41:38.189448 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:41:38.189459 | orchestrator |
2026-01-05 00:41:38.189470 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:41:38.189481 | orchestrator | Monday 05 January 2026 00:41:32 +0000 (0:00:01.300) 0:00:02.748 ********
2026-01-05 00:41:38.189491 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:41:38.189502 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:41:38.189514 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:41:38.189524 | orchestrator | ok: [testbed-manager]
2026-01-05 00:41:38.189535 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:38.189546 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:41:38.189592 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:41:38.189607 | orchestrator |
2026-01-05 00:41:38.189619 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-05 00:41:38.189632 | orchestrator |
2026-01-05 00:41:38.189644 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-05 00:41:38.189678 | orchestrator | Monday 05 January 2026 00:41:37 +0000 (0:00:05.119) 0:00:07.868 ********
2026-01-05 00:41:38.189692 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:41:38.189705 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:41:38.189717 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:41:38.189728 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:41:38.189738 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:38.189749 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:41:38.189760 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:38.189770 | orchestrator |
2026-01-05 00:41:38.189781 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:41:38.189792 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:38.189804 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:38.189815 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:38.189826 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:38.189837 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:38.189848 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:38.189858 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:41:38.189869 | orchestrator |
2026-01-05 00:41:38.189888 | orchestrator |
2026-01-05 00:41:38.189899 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:41:38.189910 | orchestrator | Monday 05 January 2026 00:41:37 +0000 (0:00:00.523) 0:00:08.392 ********
2026-01-05 00:41:38.189921 | orchestrator | ===============================================================================
2026-01-05 00:41:38.189932 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.12s
2026-01-05 00:41:38.189942 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s
2026-01-05 00:41:38.189953 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s
2026-01-05 00:41:38.189964 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-01-05 00:41:40.531808 | orchestrator | 2026-01-05 00:41:40 | INFO  | Task 8388808d-15e6-4673-aee1-6dd89805a1ec (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-05 00:41:40.531936 | orchestrator | 2026-01-05 00:41:40 | INFO  | It takes a moment until task 8388808d-15e6-4673-aee1-6dd89805a1ec (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-05 00:41:52.226495 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 00:41:52.226649 | orchestrator | 2.16.14
2026-01-05 00:41:52.226665 | orchestrator |
2026-01-05 00:41:52.226676 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-05 00:41:52.226686 | orchestrator |
2026-01-05 00:41:52.226698 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:41:52.226708 | orchestrator | Monday 05 January 2026 00:41:44 +0000 (0:00:00.334) 0:00:00.334 ********
2026-01-05 00:41:52.226718 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 00:41:52.226727 | orchestrator |
2026-01-05 00:41:52.226736 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:41:52.226745 | orchestrator | Monday 05 January 2026 00:41:44 +0000 (0:00:00.237) 0:00:00.572 ********
2026-01-05 00:41:52.226753 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:41:52.226762 | orchestrator |
2026-01-05 00:41:52.226771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.226780 | orchestrator | Monday 05 January 2026 00:41:45 +0000 (0:00:00.243) 0:00:00.815 ********
2026-01-05 00:41:52.226789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:41:52.226798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:41:52.226807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:41:52.226816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:41:52.226824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:41:52.226833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:41:52.226842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:41:52.226850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:41:52.226859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-05 00:41:52.226867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:41:52.226885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:41:52.226894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:41:52.226903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:41:52.226911 | orchestrator |
2026-01-05 00:41:52.226920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.226948 | orchestrator | Monday 05 January 2026 00:41:45 +0000 (0:00:00.484) 0:00:01.299 ********
2026-01-05 00:41:52.226958 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.226966 | orchestrator |
2026-01-05 00:41:52.226975 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.226984 | orchestrator | Monday 05 January 2026 00:41:45 +0000 (0:00:00.221) 0:00:01.521 ********
2026-01-05 00:41:52.226992 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227001 | orchestrator |
2026-01-05 00:41:52.227010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227018 | orchestrator | Monday 05 January 2026 00:41:46 +0000 (0:00:00.203) 0:00:01.725 ********
2026-01-05 00:41:52.227027 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227035 | orchestrator |
2026-01-05 00:41:52.227044 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227058 | orchestrator | Monday 05 January 2026 00:41:46 +0000 (0:00:00.207) 0:00:01.932 ********
2026-01-05 00:41:52.227067 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227075 | orchestrator |
2026-01-05 00:41:52.227084 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227093 | orchestrator | Monday 05 January 2026 00:41:46 +0000 (0:00:00.222) 0:00:02.155 ********
2026-01-05 00:41:52.227102 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227111 | orchestrator |
2026-01-05 00:41:52.227129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227138 | orchestrator | Monday 05 January 2026 00:41:46 +0000 (0:00:00.206) 0:00:02.361 ********
2026-01-05 00:41:52.227147 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227156 | orchestrator |
2026-01-05 00:41:52.227164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227173 | orchestrator | Monday 05 January 2026 00:41:46 +0000 (0:00:00.225) 0:00:02.586 ********
2026-01-05 00:41:52.227181 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227190 | orchestrator |
2026-01-05 00:41:52.227199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227208 | orchestrator | Monday 05 January 2026 00:41:47 +0000 (0:00:00.191) 0:00:02.777 ********
2026-01-05 00:41:52.227216 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227225 | orchestrator |
2026-01-05 00:41:52.227233 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227242 | orchestrator | Monday 05 January 2026 00:41:47 +0000 (0:00:00.203) 0:00:02.981 ********
2026-01-05 00:41:52.227251 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c)
2026-01-05 00:41:52.227261 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c)
2026-01-05 00:41:52.227269 | orchestrator |
2026-01-05 00:41:52.227278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227304 | orchestrator | Monday 05 January 2026 00:41:47 +0000 (0:00:00.406) 0:00:03.387 ********
2026-01-05 00:41:52.227314 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20)
2026-01-05 00:41:52.227323 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20)
2026-01-05 00:41:52.227331 | orchestrator |
2026-01-05 00:41:52.227340 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227349 | orchestrator | Monday 05 January 2026 00:41:48 +0000 (0:00:00.667) 0:00:04.055 ********
2026-01-05 00:41:52.227357 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2)
2026-01-05 00:41:52.227366 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2)
2026-01-05 00:41:52.227375 | orchestrator |
2026-01-05 00:41:52.227383 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227399 | orchestrator | Monday 05 January 2026 00:41:49 +0000 (0:00:00.676) 0:00:04.732 ********
2026-01-05 00:41:52.227408 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8)
2026-01-05 00:41:52.227417 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8)
2026-01-05 00:41:52.227425 | orchestrator |
2026-01-05 00:41:52.227434 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:52.227442 | orchestrator | Monday 05 January 2026 00:41:50 +0000 (0:00:00.895) 0:00:05.627 ********
2026-01-05 00:41:52.227451 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:41:52.227460 | orchestrator |
2026-01-05 00:41:52.227473 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:52.227482 | orchestrator | Monday 05 January 2026 00:41:50 +0000 (0:00:00.329) 0:00:05.957 ********
2026-01-05 00:41:52.227491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:41:52.227500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:41:52.227508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:41:52.227517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:41:52.227525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:41:52.227534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:41:52.227542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:41:52.227589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:41:52.227599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-05 00:41:52.227608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:41:52.227616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:41:52.227625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:41:52.227633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:41:52.227642 | orchestrator |
2026-01-05 00:41:52.227651 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:52.227659 | orchestrator | Monday 05 January 2026 00:41:50 +0000 (0:00:00.403) 0:00:06.360 ********
2026-01-05 00:41:52.227668 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227677 | orchestrator |
2026-01-05 00:41:52.227685 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:52.227694 | orchestrator | Monday 05 January 2026 00:41:50 +0000 (0:00:00.203) 0:00:06.564 ********
2026-01-05 00:41:52.227702 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227711 | orchestrator |
2026-01-05 00:41:52.227719 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:52.227728 | orchestrator | Monday 05 January 2026 00:41:51 +0000 (0:00:00.226) 0:00:06.790 ********
2026-01-05 00:41:52.227737 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227745 | orchestrator |
2026-01-05 00:41:52.227754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:52.227763 | orchestrator | Monday 05 January 2026 00:41:51 +0000 (0:00:00.217) 0:00:07.008 ********
2026-01-05 00:41:52.227771 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227780 | orchestrator |
2026-01-05 00:41:52.227788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:52.227797 | orchestrator | Monday 05 January 2026 00:41:51 +0000 (0:00:00.201) 0:00:07.209 ********
2026-01-05 00:41:52.227812 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227820 | orchestrator |
2026-01-05 00:41:52.227829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:52.227837 | orchestrator | Monday 05 January 2026 00:41:51 +0000 (0:00:00.206) 0:00:07.416 ********
2026-01-05 00:41:52.227846 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227855 | orchestrator |
2026-01-05 00:41:52.227864 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:52.227872 | orchestrator | Monday 05 January 2026 00:41:52 +0000 (0:00:00.197) 0:00:07.613 ********
2026-01-05 00:41:52.227881 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:52.227889 | orchestrator |
2026-01-05 00:41:52.227903 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:59.362342 | orchestrator | Monday 05 January 2026 00:41:52 +0000 (0:00:00.202) 0:00:07.816 ********
2026-01-05 00:41:59.362452 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.362467 | orchestrator |
2026-01-05 00:41:59.362478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:59.362488 | orchestrator | Monday 05 January 2026 00:41:52 +0000 (0:00:00.208) 0:00:08.024 ********
2026-01-05 00:41:59.362499 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-05 00:41:59.362510 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-05 00:41:59.362520 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-05 00:41:59.362530 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-05 00:41:59.362540 | orchestrator |
2026-01-05 00:41:59.362592 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:59.362603 | orchestrator | Monday 05 January 2026 00:41:53 +0000 (0:00:01.067) 0:00:09.091 ********
2026-01-05 00:41:59.362613 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.362623 | orchestrator |
2026-01-05 00:41:59.362633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:59.362643 | orchestrator | Monday 05 January 2026 00:41:53 +0000 (0:00:00.198) 0:00:09.290 ********
2026-01-05 00:41:59.362653 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.362662 | orchestrator |
2026-01-05 00:41:59.362672 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:59.362682 | orchestrator | Monday 05 January 2026 00:41:53 +0000 (0:00:00.205) 0:00:09.495 ********
2026-01-05 00:41:59.362691 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.362700 | orchestrator |
2026-01-05 00:41:59.362710 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:59.362726 | orchestrator | Monday 05 January 2026 00:41:54 +0000 (0:00:00.235) 0:00:09.731 ********
2026-01-05 00:41:59.362744 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.362762 | orchestrator |
2026-01-05 00:41:59.362777 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-05 00:41:59.362795 | orchestrator | Monday 05 January 2026 00:41:54 +0000 (0:00:00.200) 0:00:09.932 ********
2026-01-05 00:41:59.362812 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-05 00:41:59.362829 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-05 00:41:59.362847 | orchestrator |
2026-01-05 00:41:59.362891 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-05 00:41:59.362911 | orchestrator | Monday 05 January 2026 00:41:54 +0000 (0:00:00.178) 0:00:10.110 ********
2026-01-05 00:41:59.362928 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.362944 | orchestrator |
2026-01-05 00:41:59.362961 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-05 00:41:59.362977 | orchestrator | Monday 05 January 2026 00:41:54 +0000 (0:00:00.119) 0:00:10.230 ********
2026-01-05 00:41:59.362992 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.363008 | orchestrator |
2026-01-05 00:41:59.363023 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-05 00:41:59.363072 | orchestrator | Monday 05 January 2026 00:41:54 +0000 (0:00:00.123) 0:00:10.353 ********
2026-01-05 00:41:59.363090 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.363105 | orchestrator |
2026-01-05 00:41:59.363121 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-05 00:41:59.363137 | orchestrator | Monday 05 January 2026 00:41:54 +0000 (0:00:00.103) 0:00:10.457 ********
2026-01-05 00:41:59.363153 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:41:59.363169 | orchestrator |
2026-01-05 00:41:59.363187 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-05 00:41:59.363204 | orchestrator | Monday 05 January 2026 00:41:54 +0000 (0:00:00.106) 0:00:10.564 ********
2026-01-05 00:41:59.363222 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c0354e6-1633-54b4-ae3c-130b25b2cb6c'}})
2026-01-05 00:41:59.363238 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0807b7d-156a-51e9-a1ef-1ae613918df1'}})
2026-01-05 00:41:59.363255 | orchestrator |
2026-01-05 00:41:59.363272 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-05 00:41:59.363288 | orchestrator | Monday 05 January 2026 00:41:55 +0000 (0:00:00.189) 0:00:10.754 ********
2026-01-05 00:41:59.363306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c0354e6-1633-54b4-ae3c-130b25b2cb6c'}})
2026-01-05 00:41:59.363328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0807b7d-156a-51e9-a1ef-1ae613918df1'}})
2026-01-05 00:41:59.363337 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.363347 | orchestrator |
2026-01-05 00:41:59.363356 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-05 00:41:59.363366 | orchestrator | Monday 05 January 2026 00:41:55 +0000 (0:00:00.143) 0:00:10.897 ********
2026-01-05 00:41:59.363376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c0354e6-1633-54b4-ae3c-130b25b2cb6c'}})
2026-01-05 00:41:59.363385 |
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0807b7d-156a-51e9-a1ef-1ae613918df1'}})  2026-01-05 00:41:59.363400 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:41:59.363416 | orchestrator | 2026-01-05 00:41:59.363433 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 00:41:59.363448 | orchestrator | Monday 05 January 2026 00:41:55 +0000 (0:00:00.306) 0:00:11.204 ******** 2026-01-05 00:41:59.363464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c0354e6-1633-54b4-ae3c-130b25b2cb6c'}})  2026-01-05 00:41:59.363506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0807b7d-156a-51e9-a1ef-1ae613918df1'}})  2026-01-05 00:41:59.363524 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:41:59.363540 | orchestrator | 2026-01-05 00:41:59.363586 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 00:41:59.363728 | orchestrator | Monday 05 January 2026 00:41:55 +0000 (0:00:00.143) 0:00:11.348 ******** 2026-01-05 00:41:59.363757 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:41:59.363774 | orchestrator | 2026-01-05 00:41:59.363790 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 00:41:59.363806 | orchestrator | Monday 05 January 2026 00:41:55 +0000 (0:00:00.123) 0:00:11.471 ******** 2026-01-05 00:41:59.363822 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:41:59.363838 | orchestrator | 2026-01-05 00:41:59.363854 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 00:41:59.363871 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.151) 0:00:11.623 ******** 2026-01-05 00:41:59.363887 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:41:59.363905 | orchestrator | 
2026-01-05 00:41:59.363920 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-05 00:41:59.363937 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.102) 0:00:11.725 ********
2026-01-05 00:41:59.363972 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.363990 | orchestrator | 
2026-01-05 00:41:59.364007 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-05 00:41:59.364025 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.130) 0:00:11.856 ********
2026-01-05 00:41:59.364041 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.364058 | orchestrator | 
2026-01-05 00:41:59.364074 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-05 00:41:59.364090 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.124) 0:00:11.980 ********
2026-01-05 00:41:59.364106 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:41:59.364121 | orchestrator |     "ceph_osd_devices": {
2026-01-05 00:41:59.364131 | orchestrator |         "sdb": {
2026-01-05 00:41:59.364141 | orchestrator |             "osd_lvm_uuid": "3c0354e6-1633-54b4-ae3c-130b25b2cb6c"
2026-01-05 00:41:59.364151 | orchestrator |         },
2026-01-05 00:41:59.364160 | orchestrator |         "sdc": {
2026-01-05 00:41:59.364170 | orchestrator |             "osd_lvm_uuid": "a0807b7d-156a-51e9-a1ef-1ae613918df1"
2026-01-05 00:41:59.364186 | orchestrator |         }
2026-01-05 00:41:59.364201 | orchestrator |     }
2026-01-05 00:41:59.364219 | orchestrator | }
2026-01-05 00:41:59.364234 | orchestrator | 
2026-01-05 00:41:59.364250 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-05 00:41:59.364265 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.125) 0:00:12.106 ********
2026-01-05 00:41:59.364332 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.364354 | orchestrator | 
2026-01-05 00:41:59.364371 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-05 00:41:59.364387 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.112) 0:00:12.218 ********
2026-01-05 00:41:59.364403 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.364419 | orchestrator | 
2026-01-05 00:41:59.364434 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-05 00:41:59.364451 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.123) 0:00:12.341 ********
2026-01-05 00:41:59.364468 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:59.364484 | orchestrator | 
2026-01-05 00:41:59.364500 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-05 00:41:59.364516 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.131) 0:00:12.473 ********
2026-01-05 00:41:59.364531 | orchestrator | changed: [testbed-node-3] => {
2026-01-05 00:41:59.364583 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-05 00:41:59.364602 | orchestrator |         "ceph_osd_devices": {
2026-01-05 00:41:59.364618 | orchestrator |             "sdb": {
2026-01-05 00:41:59.364635 | orchestrator |                 "osd_lvm_uuid": "3c0354e6-1633-54b4-ae3c-130b25b2cb6c"
2026-01-05 00:41:59.364652 | orchestrator |             },
2026-01-05 00:41:59.364669 | orchestrator |             "sdc": {
2026-01-05 00:41:59.364685 | orchestrator |                 "osd_lvm_uuid": "a0807b7d-156a-51e9-a1ef-1ae613918df1"
2026-01-05 00:41:59.364700 | orchestrator |             }
2026-01-05 00:41:59.364718 | orchestrator |         },
2026-01-05 00:41:59.364734 | orchestrator |         "lvm_volumes": [
2026-01-05 00:41:59.364752 | orchestrator |             {
2026-01-05 00:41:59.364769 | orchestrator |                 "data": "osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c",
2026-01-05 00:41:59.364787 | orchestrator |                 "data_vg": "ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c"
2026-01-05 00:41:59.364803 | orchestrator |             },
2026-01-05 00:41:59.364820 | orchestrator |             {
2026-01-05 00:41:59.364837 | orchestrator |                 "data": "osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1",
2026-01-05 00:41:59.364853 | orchestrator |                 "data_vg": "ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1"
2026-01-05 00:41:59.364880 | orchestrator |             }
2026-01-05 00:41:59.364890 | orchestrator |         ]
2026-01-05 00:41:59.364900 | orchestrator |     }
2026-01-05 00:41:59.364922 | orchestrator | }
2026-01-05 00:41:59.364932 | orchestrator | 
2026-01-05 00:41:59.364941 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-05 00:41:59.364951 | orchestrator | Monday 05 January 2026 00:41:57 +0000 (0:00:00.321) 0:00:12.795 ********
2026-01-05 00:41:59.364961 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 00:41:59.364970 | orchestrator | 
2026-01-05 00:41:59.364980 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-05 00:41:59.364989 | orchestrator | 
2026-01-05 00:41:59.364999 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:41:59.365008 | orchestrator | Monday 05 January 2026 00:41:58 +0000 (0:00:01.729) 0:00:14.524 ********
2026-01-05 00:41:59.365025 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-05 00:41:59.365041 | orchestrator | 
2026-01-05 00:41:59.365058 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:41:59.365074 | orchestrator | Monday 05 January 2026 00:41:59 +0000 (0:00:00.225) 0:00:14.750 ********
2026-01-05 00:41:59.365090 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:41:59.365106 | orchestrator | 
2026-01-05 00:41:59.365140 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:07.584375 | orchestrator | Monday 05 January 2026 00:41:59 +0000 (0:00:00.206)
0:00:14.956 ******** 2026-01-05 00:42:07.584503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-05 00:42:07.584520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:42:07.584533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:42:07.584595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:42:07.584607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:42:07.584618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:42:07.584630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:42:07.584641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:42:07.584652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-05 00:42:07.584663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:42:07.584674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:42:07.584690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:42:07.584702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:42:07.584713 | orchestrator | 2026-01-05 00:42:07.584726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.584737 | orchestrator | Monday 05 January 2026 00:41:59 +0000 (0:00:00.380) 0:00:15.336 ******** 2026-01-05 00:42:07.584748 | orchestrator | skipping: 
[testbed-node-4] 2026-01-05 00:42:07.584760 | orchestrator | 2026-01-05 00:42:07.584771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.584782 | orchestrator | Monday 05 January 2026 00:41:59 +0000 (0:00:00.186) 0:00:15.523 ******** 2026-01-05 00:42:07.584793 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.584804 | orchestrator | 2026-01-05 00:42:07.584816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.584829 | orchestrator | Monday 05 January 2026 00:42:00 +0000 (0:00:00.180) 0:00:15.704 ******** 2026-01-05 00:42:07.584842 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.584856 | orchestrator | 2026-01-05 00:42:07.584869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.584909 | orchestrator | Monday 05 January 2026 00:42:00 +0000 (0:00:00.180) 0:00:15.884 ******** 2026-01-05 00:42:07.584923 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.584936 | orchestrator | 2026-01-05 00:42:07.584949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.584962 | orchestrator | Monday 05 January 2026 00:42:00 +0000 (0:00:00.181) 0:00:16.066 ******** 2026-01-05 00:42:07.584974 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.584987 | orchestrator | 2026-01-05 00:42:07.585000 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.585013 | orchestrator | Monday 05 January 2026 00:42:01 +0000 (0:00:00.685) 0:00:16.751 ******** 2026-01-05 00:42:07.585025 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585038 | orchestrator | 2026-01-05 00:42:07.585073 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.585086 | 
orchestrator | Monday 05 January 2026 00:42:01 +0000 (0:00:00.210) 0:00:16.962 ******** 2026-01-05 00:42:07.585099 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585113 | orchestrator | 2026-01-05 00:42:07.585125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.585138 | orchestrator | Monday 05 January 2026 00:42:01 +0000 (0:00:00.216) 0:00:17.178 ******** 2026-01-05 00:42:07.585150 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585164 | orchestrator | 2026-01-05 00:42:07.585176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.585187 | orchestrator | Monday 05 January 2026 00:42:01 +0000 (0:00:00.198) 0:00:17.377 ******** 2026-01-05 00:42:07.585198 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57) 2026-01-05 00:42:07.585210 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57) 2026-01-05 00:42:07.585221 | orchestrator | 2026-01-05 00:42:07.585232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.585243 | orchestrator | Monday 05 January 2026 00:42:02 +0000 (0:00:00.433) 0:00:17.810 ******** 2026-01-05 00:42:07.585254 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1) 2026-01-05 00:42:07.585265 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1) 2026-01-05 00:42:07.585276 | orchestrator | 2026-01-05 00:42:07.585286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.585297 | orchestrator | Monday 05 January 2026 00:42:02 +0000 (0:00:00.479) 0:00:18.289 ******** 2026-01-05 00:42:07.585308 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613) 2026-01-05 00:42:07.585319 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613) 2026-01-05 00:42:07.585329 | orchestrator | 2026-01-05 00:42:07.585340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.585371 | orchestrator | Monday 05 January 2026 00:42:03 +0000 (0:00:00.489) 0:00:18.779 ******** 2026-01-05 00:42:07.585382 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763) 2026-01-05 00:42:07.585394 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763) 2026-01-05 00:42:07.585405 | orchestrator | 2026-01-05 00:42:07.585416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:07.585427 | orchestrator | Monday 05 January 2026 00:42:03 +0000 (0:00:00.467) 0:00:19.247 ******** 2026-01-05 00:42:07.585438 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 00:42:07.585449 | orchestrator | 2026-01-05 00:42:07.585460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.585471 | orchestrator | Monday 05 January 2026 00:42:03 +0000 (0:00:00.339) 0:00:19.587 ******** 2026-01-05 00:42:07.585490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-05 00:42:07.585501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:42:07.585512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:42:07.585522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:42:07.585533 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:42:07.585567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:42:07.585578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:42:07.585589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:42:07.585600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-05 00:42:07.585611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:42:07.585622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:42:07.585632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:42:07.585643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:42:07.585654 | orchestrator | 2026-01-05 00:42:07.585665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.585676 | orchestrator | Monday 05 January 2026 00:42:04 +0000 (0:00:00.397) 0:00:19.985 ******** 2026-01-05 00:42:07.585687 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585698 | orchestrator | 2026-01-05 00:42:07.585709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.585726 | orchestrator | Monday 05 January 2026 00:42:05 +0000 (0:00:00.684) 0:00:20.669 ******** 2026-01-05 00:42:07.585738 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585748 | orchestrator | 2026-01-05 00:42:07.585759 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-01-05 00:42:07.585770 | orchestrator | Monday 05 January 2026 00:42:05 +0000 (0:00:00.206) 0:00:20.876 ******** 2026-01-05 00:42:07.585781 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585792 | orchestrator | 2026-01-05 00:42:07.585803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.585814 | orchestrator | Monday 05 January 2026 00:42:05 +0000 (0:00:00.204) 0:00:21.081 ******** 2026-01-05 00:42:07.585825 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585836 | orchestrator | 2026-01-05 00:42:07.585847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.585858 | orchestrator | Monday 05 January 2026 00:42:05 +0000 (0:00:00.194) 0:00:21.276 ******** 2026-01-05 00:42:07.585868 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585879 | orchestrator | 2026-01-05 00:42:07.585890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.585901 | orchestrator | Monday 05 January 2026 00:42:05 +0000 (0:00:00.224) 0:00:21.500 ******** 2026-01-05 00:42:07.585912 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585923 | orchestrator | 2026-01-05 00:42:07.585934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.585945 | orchestrator | Monday 05 January 2026 00:42:06 +0000 (0:00:00.213) 0:00:21.714 ******** 2026-01-05 00:42:07.585955 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:07.585966 | orchestrator | 2026-01-05 00:42:07.585977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.585988 | orchestrator | Monday 05 January 2026 00:42:06 +0000 (0:00:00.198) 0:00:21.912 ******** 2026-01-05 00:42:07.586007 | orchestrator | skipping: [testbed-node-4] 
2026-01-05 00:42:07.586085 | orchestrator | 2026-01-05 00:42:07.586099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.586110 | orchestrator | Monday 05 January 2026 00:42:06 +0000 (0:00:00.205) 0:00:22.118 ******** 2026-01-05 00:42:07.586121 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-05 00:42:07.586133 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-05 00:42:07.586144 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-05 00:42:07.586155 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-05 00:42:07.586166 | orchestrator | 2026-01-05 00:42:07.586177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:07.586188 | orchestrator | Monday 05 January 2026 00:42:07 +0000 (0:00:00.849) 0:00:22.967 ******** 2026-01-05 00:42:07.586199 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.446292 | orchestrator | 2026-01-05 00:42:13.446408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:13.446425 | orchestrator | Monday 05 January 2026 00:42:07 +0000 (0:00:00.212) 0:00:23.180 ******** 2026-01-05 00:42:13.446437 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.446450 | orchestrator | 2026-01-05 00:42:13.446460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:13.446472 | orchestrator | Monday 05 January 2026 00:42:07 +0000 (0:00:00.193) 0:00:23.373 ******** 2026-01-05 00:42:13.446483 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.446494 | orchestrator | 2026-01-05 00:42:13.446505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:13.446516 | orchestrator | Monday 05 January 2026 00:42:07 +0000 (0:00:00.193) 0:00:23.567 ******** 2026-01-05 00:42:13.446527 | 
orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.446538 | orchestrator | 2026-01-05 00:42:13.446601 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-05 00:42:13.446612 | orchestrator | Monday 05 January 2026 00:42:08 +0000 (0:00:00.794) 0:00:24.361 ******** 2026-01-05 00:42:13.446623 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-05 00:42:13.446634 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-05 00:42:13.446644 | orchestrator | 2026-01-05 00:42:13.446655 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-05 00:42:13.446666 | orchestrator | Monday 05 January 2026 00:42:08 +0000 (0:00:00.178) 0:00:24.540 ******** 2026-01-05 00:42:13.446677 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.446689 | orchestrator | 2026-01-05 00:42:13.446700 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-05 00:42:13.446710 | orchestrator | Monday 05 January 2026 00:42:09 +0000 (0:00:00.131) 0:00:24.671 ******** 2026-01-05 00:42:13.446721 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.446732 | orchestrator | 2026-01-05 00:42:13.446743 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-05 00:42:13.446753 | orchestrator | Monday 05 January 2026 00:42:09 +0000 (0:00:00.138) 0:00:24.809 ******** 2026-01-05 00:42:13.446764 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.446775 | orchestrator | 2026-01-05 00:42:13.446786 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-05 00:42:13.446797 | orchestrator | Monday 05 January 2026 00:42:09 +0000 (0:00:00.165) 0:00:24.975 ******** 2026-01-05 00:42:13.446808 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:42:13.446822 | 
orchestrator | 2026-01-05 00:42:13.446836 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-05 00:42:13.446848 | orchestrator | Monday 05 January 2026 00:42:09 +0000 (0:00:00.156) 0:00:25.131 ******** 2026-01-05 00:42:13.446862 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'}}) 2026-01-05 00:42:13.446876 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7959794c-cc9c-59d9-9b66-2faefa464ed4'}}) 2026-01-05 00:42:13.446915 | orchestrator | 2026-01-05 00:42:13.446930 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-05 00:42:13.446942 | orchestrator | Monday 05 January 2026 00:42:09 +0000 (0:00:00.181) 0:00:25.313 ******** 2026-01-05 00:42:13.446956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'}})  2026-01-05 00:42:13.446990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7959794c-cc9c-59d9-9b66-2faefa464ed4'}})  2026-01-05 00:42:13.447015 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.447028 | orchestrator | 2026-01-05 00:42:13.447040 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-05 00:42:13.447053 | orchestrator | Monday 05 January 2026 00:42:09 +0000 (0:00:00.131) 0:00:25.445 ******** 2026-01-05 00:42:13.447066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'}})  2026-01-05 00:42:13.447078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7959794c-cc9c-59d9-9b66-2faefa464ed4'}})  2026-01-05 00:42:13.447091 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.447103 | orchestrator | 2026-01-05 
00:42:13.447115 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 00:42:13.447128 | orchestrator | Monday 05 January 2026 00:42:09 +0000 (0:00:00.120) 0:00:25.565 ******** 2026-01-05 00:42:13.447167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'}})  2026-01-05 00:42:13.447179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7959794c-cc9c-59d9-9b66-2faefa464ed4'}})  2026-01-05 00:42:13.447190 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.447201 | orchestrator | 2026-01-05 00:42:13.447211 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 00:42:13.447222 | orchestrator | Monday 05 January 2026 00:42:10 +0000 (0:00:00.121) 0:00:25.687 ******** 2026-01-05 00:42:13.447233 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:42:13.447244 | orchestrator | 2026-01-05 00:42:13.447254 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 00:42:13.447265 | orchestrator | Monday 05 January 2026 00:42:10 +0000 (0:00:00.146) 0:00:25.833 ******** 2026-01-05 00:42:13.447276 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:42:13.447287 | orchestrator | 2026-01-05 00:42:13.447298 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 00:42:13.447309 | orchestrator | Monday 05 January 2026 00:42:10 +0000 (0:00:00.130) 0:00:25.964 ******** 2026-01-05 00:42:13.447337 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:13.447349 | orchestrator | 2026-01-05 00:42:13.447359 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-05 00:42:13.447370 | orchestrator | Monday 05 January 2026 00:42:10 +0000 (0:00:00.236) 0:00:26.201 ******** 2026-01-05 
00:42:13.447381 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:42:13.447391 | orchestrator | 
2026-01-05 00:42:13.447402 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-05 00:42:13.447413 | orchestrator | Monday 05 January 2026 00:42:10 +0000 (0:00:00.097) 0:00:26.298 ********
2026-01-05 00:42:13.447424 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:42:13.447434 | orchestrator | 
2026-01-05 00:42:13.447445 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-05 00:42:13.447456 | orchestrator | Monday 05 January 2026 00:42:10 +0000 (0:00:00.100) 0:00:26.399 ********
2026-01-05 00:42:13.447467 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:42:13.447478 | orchestrator |     "ceph_osd_devices": {
2026-01-05 00:42:13.447489 | orchestrator |         "sdb": {
2026-01-05 00:42:13.447501 | orchestrator |             "osd_lvm_uuid": "f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4"
2026-01-05 00:42:13.447523 | orchestrator |         },
2026-01-05 00:42:13.447534 | orchestrator |         "sdc": {
2026-01-05 00:42:13.447570 | orchestrator |             "osd_lvm_uuid": "7959794c-cc9c-59d9-9b66-2faefa464ed4"
2026-01-05 00:42:13.447582 | orchestrator |         }
2026-01-05 00:42:13.447593 | orchestrator |     }
2026-01-05 00:42:13.447604 | orchestrator | }
2026-01-05 00:42:13.447615 | orchestrator | 
2026-01-05 00:42:13.447625 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-05 00:42:13.447636 | orchestrator | Monday 05 January 2026 00:42:10 +0000 (0:00:00.108) 0:00:26.507 ********
2026-01-05 00:42:13.447647 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:42:13.447657 | orchestrator | 
2026-01-05 00:42:13.447668 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-05 00:42:13.447679 | orchestrator | Monday 05 January 2026 00:42:11 +0000 (0:00:00.109) 0:00:26.616 ********
2026-01-05 00:42:13.447689 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:42:13.447700 | orchestrator | 
2026-01-05 00:42:13.447710 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-05 00:42:13.447721 | orchestrator | Monday 05 January 2026 00:42:11 +0000 (0:00:00.107) 0:00:26.724 ********
2026-01-05 00:42:13.447732 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:42:13.447742 | orchestrator | 
2026-01-05 00:42:13.447753 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-05 00:42:13.447764 | orchestrator | Monday 05 January 2026 00:42:11 +0000 (0:00:00.117) 0:00:26.841 ********
2026-01-05 00:42:13.447774 | orchestrator | changed: [testbed-node-4] => {
2026-01-05 00:42:13.447785 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-05 00:42:13.447796 | orchestrator |         "ceph_osd_devices": {
2026-01-05 00:42:13.447807 | orchestrator |             "sdb": {
2026-01-05 00:42:13.447818 | orchestrator |                 "osd_lvm_uuid": "f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4"
2026-01-05 00:42:13.447828 | orchestrator |             },
2026-01-05 00:42:13.447839 | orchestrator |             "sdc": {
2026-01-05 00:42:13.447850 | orchestrator |                 "osd_lvm_uuid": "7959794c-cc9c-59d9-9b66-2faefa464ed4"
2026-01-05 00:42:13.447861 | orchestrator |             }
2026-01-05 00:42:13.447871 | orchestrator |         },
2026-01-05 00:42:13.447882 | orchestrator |         "lvm_volumes": [
2026-01-05 00:42:13.447893 | orchestrator |             {
2026-01-05 00:42:13.447904 | orchestrator |                 "data": "osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4",
2026-01-05 00:42:13.447914 | orchestrator |                 "data_vg": "ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4"
2026-01-05 00:42:13.447925 | orchestrator |             },
2026-01-05 00:42:13.447936 | orchestrator |             {
2026-01-05 00:42:13.447946 | orchestrator |                 "data": "osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4",
2026-01-05 00:42:13.447957 | orchestrator |                 "data_vg": "ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4"
2026-01-05
00:42:13.447968 | orchestrator |  } 2026-01-05 00:42:13.447978 | orchestrator |  ] 2026-01-05 00:42:13.447989 | orchestrator |  } 2026-01-05 00:42:13.448000 | orchestrator | } 2026-01-05 00:42:13.448010 | orchestrator | 2026-01-05 00:42:13.448021 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-05 00:42:13.448032 | orchestrator | Monday 05 January 2026 00:42:11 +0000 (0:00:00.166) 0:00:27.008 ******** 2026-01-05 00:42:13.448043 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-05 00:42:13.448054 | orchestrator | 2026-01-05 00:42:13.448064 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-05 00:42:13.448075 | orchestrator | 2026-01-05 00:42:13.448085 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 00:42:13.448096 | orchestrator | Monday 05 January 2026 00:42:12 +0000 (0:00:00.949) 0:00:27.957 ******** 2026-01-05 00:42:13.448107 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-05 00:42:13.448118 | orchestrator | 2026-01-05 00:42:13.448129 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 00:42:13.448154 | orchestrator | Monday 05 January 2026 00:42:12 +0000 (0:00:00.548) 0:00:28.505 ******** 2026-01-05 00:42:13.448165 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:42:13.448176 | orchestrator | 2026-01-05 00:42:13.448187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:13.448197 | orchestrator | Monday 05 January 2026 00:42:13 +0000 (0:00:00.240) 0:00:28.745 ******** 2026-01-05 00:42:13.448208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-05 00:42:13.448219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => 
(item=loop1) 2026-01-05 00:42:13.448229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-05 00:42:13.448240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-05 00:42:13.448251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-05 00:42:13.448268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-05 00:42:21.093749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-05 00:42:21.093893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-05 00:42:21.093919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-05 00:42:21.093941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-05 00:42:21.093961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-05 00:42:21.093983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-05 00:42:21.094002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-05 00:42:21.094014 | orchestrator | 2026-01-05 00:42:21.094085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094098 | orchestrator | Monday 05 January 2026 00:42:13 +0000 (0:00:00.293) 0:00:29.038 ******** 2026-01-05 00:42:21.094110 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.094123 | orchestrator | 2026-01-05 00:42:21.094134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094145 | orchestrator | Monday 05 January 2026 00:42:13 +0000 
(0:00:00.175) 0:00:29.214 ******** 2026-01-05 00:42:21.094156 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.094167 | orchestrator | 2026-01-05 00:42:21.094178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094189 | orchestrator | Monday 05 January 2026 00:42:13 +0000 (0:00:00.142) 0:00:29.356 ******** 2026-01-05 00:42:21.094200 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.094211 | orchestrator | 2026-01-05 00:42:21.094222 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094233 | orchestrator | Monday 05 January 2026 00:42:13 +0000 (0:00:00.128) 0:00:29.485 ******** 2026-01-05 00:42:21.094244 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.094255 | orchestrator | 2026-01-05 00:42:21.094269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094282 | orchestrator | Monday 05 January 2026 00:42:14 +0000 (0:00:00.215) 0:00:29.700 ******** 2026-01-05 00:42:21.094294 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.094307 | orchestrator | 2026-01-05 00:42:21.094319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094332 | orchestrator | Monday 05 January 2026 00:42:14 +0000 (0:00:00.193) 0:00:29.893 ******** 2026-01-05 00:42:21.094344 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.094357 | orchestrator | 2026-01-05 00:42:21.094370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094413 | orchestrator | Monday 05 January 2026 00:42:14 +0000 (0:00:00.197) 0:00:30.090 ******** 2026-01-05 00:42:21.094426 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.094439 | orchestrator | 2026-01-05 00:42:21.094452 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2026-01-05 00:42:21.094464 | orchestrator | Monday 05 January 2026 00:42:14 +0000 (0:00:00.177) 0:00:30.268 ******** 2026-01-05 00:42:21.094478 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.094491 | orchestrator | 2026-01-05 00:42:21.094503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094517 | orchestrator | Monday 05 January 2026 00:42:14 +0000 (0:00:00.160) 0:00:30.428 ******** 2026-01-05 00:42:21.094530 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab) 2026-01-05 00:42:21.094570 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab) 2026-01-05 00:42:21.094583 | orchestrator | 2026-01-05 00:42:21.094596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094609 | orchestrator | Monday 05 January 2026 00:42:15 +0000 (0:00:00.710) 0:00:31.139 ******** 2026-01-05 00:42:21.094622 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421) 2026-01-05 00:42:21.094632 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421) 2026-01-05 00:42:21.094643 | orchestrator | 2026-01-05 00:42:21.094654 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094665 | orchestrator | Monday 05 January 2026 00:42:15 +0000 (0:00:00.388) 0:00:31.527 ******** 2026-01-05 00:42:21.094676 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678) 2026-01-05 00:42:21.094687 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678) 2026-01-05 00:42:21.094698 | orchestrator | 2026-01-05 
00:42:21.094708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094719 | orchestrator | Monday 05 January 2026 00:42:16 +0000 (0:00:00.411) 0:00:31.939 ******** 2026-01-05 00:42:21.094730 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a) 2026-01-05 00:42:21.094741 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a) 2026-01-05 00:42:21.094752 | orchestrator | 2026-01-05 00:42:21.094762 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:21.094773 | orchestrator | Monday 05 January 2026 00:42:16 +0000 (0:00:00.407) 0:00:32.347 ******** 2026-01-05 00:42:21.094784 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 00:42:21.094795 | orchestrator | 2026-01-05 00:42:21.094806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.094837 | orchestrator | Monday 05 January 2026 00:42:17 +0000 (0:00:00.372) 0:00:32.719 ******** 2026-01-05 00:42:21.094849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-05 00:42:21.094860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-05 00:42:21.094870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-05 00:42:21.094881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-05 00:42:21.094891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-05 00:42:21.094921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-05 00:42:21.094938 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-05 00:42:21.094958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-05 00:42:21.094989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-05 00:42:21.095007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-05 00:42:21.095026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-05 00:42:21.095044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-05 00:42:21.095063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-05 00:42:21.095080 | orchestrator | 2026-01-05 00:42:21.095098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095116 | orchestrator | Monday 05 January 2026 00:42:17 +0000 (0:00:00.365) 0:00:33.085 ******** 2026-01-05 00:42:21.095136 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095154 | orchestrator | 2026-01-05 00:42:21.095172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095191 | orchestrator | Monday 05 January 2026 00:42:17 +0000 (0:00:00.205) 0:00:33.291 ******** 2026-01-05 00:42:21.095209 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095227 | orchestrator | 2026-01-05 00:42:21.095244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095270 | orchestrator | Monday 05 January 2026 00:42:17 +0000 (0:00:00.184) 0:00:33.475 ******** 2026-01-05 00:42:21.095289 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095307 | orchestrator | 
2026-01-05 00:42:21.095326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095344 | orchestrator | Monday 05 January 2026 00:42:18 +0000 (0:00:00.219) 0:00:33.695 ******** 2026-01-05 00:42:21.095361 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095377 | orchestrator | 2026-01-05 00:42:21.095394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095412 | orchestrator | Monday 05 January 2026 00:42:18 +0000 (0:00:00.186) 0:00:33.882 ******** 2026-01-05 00:42:21.095431 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095448 | orchestrator | 2026-01-05 00:42:21.095466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095483 | orchestrator | Monday 05 January 2026 00:42:18 +0000 (0:00:00.186) 0:00:34.068 ******** 2026-01-05 00:42:21.095500 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095517 | orchestrator | 2026-01-05 00:42:21.095534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095587 | orchestrator | Monday 05 January 2026 00:42:19 +0000 (0:00:00.659) 0:00:34.728 ******** 2026-01-05 00:42:21.095605 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095623 | orchestrator | 2026-01-05 00:42:21.095642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095661 | orchestrator | Monday 05 January 2026 00:42:19 +0000 (0:00:00.242) 0:00:34.971 ******** 2026-01-05 00:42:21.095677 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095695 | orchestrator | 2026-01-05 00:42:21.095712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095730 | orchestrator | Monday 05 January 2026 00:42:19 +0000 
(0:00:00.201) 0:00:35.172 ******** 2026-01-05 00:42:21.095749 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-05 00:42:21.095769 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-05 00:42:21.095787 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-05 00:42:21.095805 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-05 00:42:21.095823 | orchestrator | 2026-01-05 00:42:21.095841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095858 | orchestrator | Monday 05 January 2026 00:42:20 +0000 (0:00:00.733) 0:00:35.906 ******** 2026-01-05 00:42:21.095876 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095912 | orchestrator | 2026-01-05 00:42:21.095931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.095950 | orchestrator | Monday 05 January 2026 00:42:20 +0000 (0:00:00.224) 0:00:36.131 ******** 2026-01-05 00:42:21.095969 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.095985 | orchestrator | 2026-01-05 00:42:21.096001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.096019 | orchestrator | Monday 05 January 2026 00:42:20 +0000 (0:00:00.221) 0:00:36.352 ******** 2026-01-05 00:42:21.096037 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.096055 | orchestrator | 2026-01-05 00:42:21.096073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:21.096090 | orchestrator | Monday 05 January 2026 00:42:20 +0000 (0:00:00.163) 0:00:36.516 ******** 2026-01-05 00:42:21.096109 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:21.096126 | orchestrator | 2026-01-05 00:42:21.096163 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-05 00:42:25.125200 | orchestrator | 
Monday 05 January 2026 00:42:21 +0000 (0:00:00.172) 0:00:36.689 ******** 2026-01-05 00:42:25.125300 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-05 00:42:25.125311 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-05 00:42:25.125321 | orchestrator | 2026-01-05 00:42:25.125329 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-05 00:42:25.125338 | orchestrator | Monday 05 January 2026 00:42:21 +0000 (0:00:00.149) 0:00:36.838 ******** 2026-01-05 00:42:25.125346 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.125354 | orchestrator | 2026-01-05 00:42:25.125361 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-05 00:42:25.125369 | orchestrator | Monday 05 January 2026 00:42:21 +0000 (0:00:00.119) 0:00:36.958 ******** 2026-01-05 00:42:25.125376 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.125384 | orchestrator | 2026-01-05 00:42:25.125392 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-05 00:42:25.125399 | orchestrator | Monday 05 January 2026 00:42:21 +0000 (0:00:00.149) 0:00:37.107 ******** 2026-01-05 00:42:25.125407 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.125414 | orchestrator | 2026-01-05 00:42:25.125422 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-05 00:42:25.125429 | orchestrator | Monday 05 January 2026 00:42:22 +0000 (0:00:00.738) 0:00:37.846 ******** 2026-01-05 00:42:25.125437 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:42:25.125445 | orchestrator | 2026-01-05 00:42:25.125454 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-05 00:42:25.125461 | orchestrator | Monday 05 January 2026 00:42:22 +0000 (0:00:00.157) 0:00:38.003 ******** 
2026-01-05 00:42:25.125470 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1631feb6-d96c-5a43-89dd-a558edd73d68'}}) 2026-01-05 00:42:25.125478 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c322448e-6042-58d0-bdfa-5021630018c9'}}) 2026-01-05 00:42:25.125485 | orchestrator | 2026-01-05 00:42:25.125493 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-05 00:42:25.125500 | orchestrator | Monday 05 January 2026 00:42:22 +0000 (0:00:00.163) 0:00:38.167 ******** 2026-01-05 00:42:25.125509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1631feb6-d96c-5a43-89dd-a558edd73d68'}})  2026-01-05 00:42:25.125518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c322448e-6042-58d0-bdfa-5021630018c9'}})  2026-01-05 00:42:25.125526 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.125533 | orchestrator | 2026-01-05 00:42:25.125654 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-05 00:42:25.125662 | orchestrator | Monday 05 January 2026 00:42:22 +0000 (0:00:00.117) 0:00:38.284 ******** 2026-01-05 00:42:25.125779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1631feb6-d96c-5a43-89dd-a558edd73d68'}})  2026-01-05 00:42:25.125796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c322448e-6042-58d0-bdfa-5021630018c9'}})  2026-01-05 00:42:25.125809 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.125822 | orchestrator | 2026-01-05 00:42:25.125836 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 00:42:25.125849 | orchestrator | Monday 05 January 2026 00:42:22 +0000 (0:00:00.126) 0:00:38.411 ******** 2026-01-05 00:42:25.125887 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1631feb6-d96c-5a43-89dd-a558edd73d68'}})  2026-01-05 00:42:25.125903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c322448e-6042-58d0-bdfa-5021630018c9'}})  2026-01-05 00:42:25.125918 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.125933 | orchestrator | 2026-01-05 00:42:25.125947 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 00:42:25.125961 | orchestrator | Monday 05 January 2026 00:42:22 +0000 (0:00:00.123) 0:00:38.535 ******** 2026-01-05 00:42:25.125975 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:42:25.125989 | orchestrator | 2026-01-05 00:42:25.126002 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 00:42:25.126083 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.116) 0:00:38.651 ******** 2026-01-05 00:42:25.126099 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:42:25.126112 | orchestrator | 2026-01-05 00:42:25.126126 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 00:42:25.126140 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.115) 0:00:38.766 ******** 2026-01-05 00:42:25.126153 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.126167 | orchestrator | 2026-01-05 00:42:25.126180 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-05 00:42:25.126192 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.107) 0:00:38.873 ******** 2026-01-05 00:42:25.126200 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.126207 | orchestrator | 2026-01-05 00:42:25.126214 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-05 00:42:25.126221 | orchestrator | 
Monday 05 January 2026 00:42:23 +0000 (0:00:00.105) 0:00:38.979 ******** 2026-01-05 00:42:25.126229 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.126236 | orchestrator | 2026-01-05 00:42:25.126243 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-05 00:42:25.126250 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.106) 0:00:39.085 ******** 2026-01-05 00:42:25.126258 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:42:25.126265 | orchestrator |  "ceph_osd_devices": { 2026-01-05 00:42:25.126272 | orchestrator |  "sdb": { 2026-01-05 00:42:25.126299 | orchestrator |  "osd_lvm_uuid": "1631feb6-d96c-5a43-89dd-a558edd73d68" 2026-01-05 00:42:25.126308 | orchestrator |  }, 2026-01-05 00:42:25.126315 | orchestrator |  "sdc": { 2026-01-05 00:42:25.126322 | orchestrator |  "osd_lvm_uuid": "c322448e-6042-58d0-bdfa-5021630018c9" 2026-01-05 00:42:25.126330 | orchestrator |  } 2026-01-05 00:42:25.126337 | orchestrator |  } 2026-01-05 00:42:25.126345 | orchestrator | } 2026-01-05 00:42:25.126352 | orchestrator | 2026-01-05 00:42:25.126359 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-05 00:42:25.126367 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.113) 0:00:39.199 ******** 2026-01-05 00:42:25.126374 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.126381 | orchestrator | 2026-01-05 00:42:25.126388 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-05 00:42:25.126395 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.106) 0:00:39.306 ******** 2026-01-05 00:42:25.126415 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.126423 | orchestrator | 2026-01-05 00:42:25.126430 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-05 00:42:25.126437 | orchestrator | Monday 05 
January 2026 00:42:23 +0000 (0:00:00.251) 0:00:39.557 ******** 2026-01-05 00:42:25.126444 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:42:25.126451 | orchestrator | 2026-01-05 00:42:25.126458 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-05 00:42:25.126466 | orchestrator | Monday 05 January 2026 00:42:24 +0000 (0:00:00.109) 0:00:39.667 ******** 2026-01-05 00:42:25.126473 | orchestrator | changed: [testbed-node-5] => { 2026-01-05 00:42:25.126480 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-05 00:42:25.126487 | orchestrator |  "ceph_osd_devices": { 2026-01-05 00:42:25.126494 | orchestrator |  "sdb": { 2026-01-05 00:42:25.126502 | orchestrator |  "osd_lvm_uuid": "1631feb6-d96c-5a43-89dd-a558edd73d68" 2026-01-05 00:42:25.126509 | orchestrator |  }, 2026-01-05 00:42:25.126516 | orchestrator |  "sdc": { 2026-01-05 00:42:25.126523 | orchestrator |  "osd_lvm_uuid": "c322448e-6042-58d0-bdfa-5021630018c9" 2026-01-05 00:42:25.126531 | orchestrator |  } 2026-01-05 00:42:25.126561 | orchestrator |  }, 2026-01-05 00:42:25.126568 | orchestrator |  "lvm_volumes": [ 2026-01-05 00:42:25.126575 | orchestrator |  { 2026-01-05 00:42:25.126583 | orchestrator |  "data": "osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68", 2026-01-05 00:42:25.126591 | orchestrator |  "data_vg": "ceph-1631feb6-d96c-5a43-89dd-a558edd73d68" 2026-01-05 00:42:25.126598 | orchestrator |  }, 2026-01-05 00:42:25.126605 | orchestrator |  { 2026-01-05 00:42:25.126613 | orchestrator |  "data": "osd-block-c322448e-6042-58d0-bdfa-5021630018c9", 2026-01-05 00:42:25.126623 | orchestrator |  "data_vg": "ceph-c322448e-6042-58d0-bdfa-5021630018c9" 2026-01-05 00:42:25.126634 | orchestrator |  } 2026-01-05 00:42:25.126650 | orchestrator |  ] 2026-01-05 00:42:25.126658 | orchestrator |  } 2026-01-05 00:42:25.126665 | orchestrator | } 2026-01-05 00:42:25.126672 | orchestrator | 2026-01-05 00:42:25.126679 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2026-01-05 00:42:25.126686 | orchestrator | Monday 05 January 2026 00:42:24 +0000 (0:00:00.176) 0:00:39.843 ******** 2026-01-05 00:42:25.126694 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-05 00:42:25.126701 | orchestrator | 2026-01-05 00:42:25.126708 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:42:25.126716 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 00:42:25.126725 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 00:42:25.126732 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 00:42:25.126739 | orchestrator | 2026-01-05 00:42:25.126746 | orchestrator | 2026-01-05 00:42:25.126753 | orchestrator | 2026-01-05 00:42:25.126761 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:42:25.126768 | orchestrator | Monday 05 January 2026 00:42:25 +0000 (0:00:00.868) 0:00:40.712 ******** 2026-01-05 00:42:25.126775 | orchestrator | =============================================================================== 2026-01-05 00:42:25.126782 | orchestrator | Write configuration file ------------------------------------------------ 3.55s 2026-01-05 00:42:25.126789 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2026-01-05 00:42:25.126796 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2026-01-05 00:42:25.126803 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2026-01-05 00:42:25.126836 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.01s 2026-01-05 
00:42:25.126844 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 1.01s 2026-01-05 00:42:25.126851 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2026-01-05 00:42:25.126858 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2026-01-05 00:42:25.126865 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-01-05 00:42:25.126872 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-01-05 00:42:25.126879 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-01-05 00:42:25.126886 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-01-05 00:42:25.126893 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-01-05 00:42:25.126907 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-01-05 00:42:25.449317 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-01-05 00:42:25.449407 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-01-05 00:42:25.449416 | orchestrator | Print configuration data ------------------------------------------------ 0.66s 2026-01-05 00:42:25.449423 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2026-01-05 00:42:25.449430 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.55s 2026-01-05 00:42:25.449436 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.54s 2026-01-05 00:42:48.137168 | orchestrator | 2026-01-05 00:42:48 | INFO  | Task f61f6c3b-42b1-420f-a9ee-acb2f0db29a3 (sync inventory) is running in 
background. Output coming soon.
2026-01-05 00:43:16.659865 | orchestrator | 2026-01-05 00:42:49 | INFO  | Starting group_vars file reorganization
2026-01-05 00:43:16.660000 | orchestrator | 2026-01-05 00:42:49 | INFO  | Moved 0 file(s) to their respective directories
2026-01-05 00:43:16.660017 | orchestrator | 2026-01-05 00:42:49 | INFO  | Group_vars file reorganization completed
2026-01-05 00:43:16.660029 | orchestrator | 2026-01-05 00:42:52 | INFO  | Starting variable preparation from inventory
2026-01-05 00:43:16.660041 | orchestrator | 2026-01-05 00:42:55 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-05 00:43:16.660053 | orchestrator | 2026-01-05 00:42:55 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-05 00:43:16.660085 | orchestrator | 2026-01-05 00:42:56 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-05 00:43:16.660097 | orchestrator | 2026-01-05 00:42:56 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-05 00:43:16.660109 | orchestrator | 2026-01-05 00:42:56 | INFO  | Variable preparation completed
2026-01-05 00:43:16.660120 | orchestrator | 2026-01-05 00:42:57 | INFO  | Starting inventory overwrite handling
2026-01-05 00:43:16.660136 | orchestrator | 2026-01-05 00:42:57 | INFO  | Handling group overwrites in 99-overwrite
2026-01-05 00:43:16.660148 | orchestrator | 2026-01-05 00:42:57 | INFO  | Removing group frr:children from 60-generic
2026-01-05 00:43:16.660159 | orchestrator | 2026-01-05 00:42:57 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-05 00:43:16.660170 | orchestrator | 2026-01-05 00:42:57 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-05 00:43:16.660181 | orchestrator | 2026-01-05 00:42:57 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-05 00:43:16.660192 | orchestrator | 2026-01-05 00:42:57 | INFO  | Handling group overwrites in 20-roles
2026-01-05 00:43:16.660227 | orchestrator | 2026-01-05 00:42:57 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-05 00:43:16.660239 | orchestrator | 2026-01-05 00:42:57 | INFO  | Removed 5 group(s) in total
2026-01-05 00:43:16.660250 | orchestrator | 2026-01-05 00:42:57 | INFO  | Inventory overwrite handling completed
2026-01-05 00:43:16.660261 | orchestrator | 2026-01-05 00:42:58 | INFO  | Starting merge of inventory files
2026-01-05 00:43:16.660271 | orchestrator | 2026-01-05 00:42:58 | INFO  | Inventory files merged successfully
2026-01-05 00:43:16.660282 | orchestrator | 2026-01-05 00:43:04 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-05 00:43:16.660293 | orchestrator | 2026-01-05 00:43:15 | INFO  | Successfully wrote ClusterShell configuration
2026-01-05 00:43:16.660304 | orchestrator | [master ab426f4] 2026-01-05-00-43
2026-01-05 00:43:16.660316 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-05 00:43:19.157583 | orchestrator | 2026-01-05 00:43:19 | INFO  | Task 16e25e86-d649-4746-a8ee-754b548a01c2 (ceph-create-lvm-devices) was prepared for execution.
2026-01-05 00:43:19.157699 | orchestrator | 2026-01-05 00:43:19 | INFO  | It takes a moment until task 16e25e86-d649-4746-a8ee-754b548a01c2 (ceph-create-lvm-devices) has been started and output is visible here.
2026-01-05 00:43:31.039746 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 00:43:31.039878 | orchestrator | 2.16.14
2026-01-05 00:43:31.039898 | orchestrator |
2026-01-05 00:43:31.039911 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-05 00:43:31.039923 | orchestrator |
2026-01-05 00:43:31.039935 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:43:31.039946 | orchestrator | Monday 05 January 2026  00:43:23 +0000 (0:00:00.312) 0:00:00.312 ********
2026-01-05 00:43:31.039958 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 00:43:31.039969 | orchestrator |
2026-01-05 00:43:31.039980 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:43:31.039991 | orchestrator | Monday 05 January 2026  00:43:24 +0000 (0:00:00.228) 0:00:00.541 ********
2026-01-05 00:43:31.040002 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:43:31.040013 | orchestrator |
2026-01-05 00:43:31.040025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040036 | orchestrator | Monday 05 January 2026  00:43:24 +0000 (0:00:00.225) 0:00:00.766 ********
2026-01-05 00:43:31.040047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:43:31.040058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:43:31.040069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:43:31.040080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:43:31.040091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:43:31.040102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:43:31.040112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:43:31.040123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:43:31.040134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-05 00:43:31.040145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:43:31.040156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:43:31.040167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:43:31.040208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:43:31.040220 | orchestrator |
2026-01-05 00:43:31.040230 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040241 | orchestrator | Monday 05 January 2026  00:43:24 +0000 (0:00:00.532) 0:00:01.299 ********
2026-01-05 00:43:31.040255 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.040268 | orchestrator |
2026-01-05 00:43:31.040280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040293 | orchestrator | Monday 05 January 2026  00:43:24 +0000 (0:00:00.208) 0:00:01.507 ********
2026-01-05 00:43:31.040307 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.040319 | orchestrator |
2026-01-05 00:43:31.040332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040346 | orchestrator | Monday 05 January 2026  00:43:25 +0000 (0:00:00.286) 0:00:01.793 ********
2026-01-05 00:43:31.040359 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.040371 | orchestrator |
2026-01-05 00:43:31.040384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040397 | orchestrator | Monday 05 January 2026  00:43:25 +0000 (0:00:00.233) 0:00:02.027 ********
2026-01-05 00:43:31.040411 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.040440 | orchestrator |
2026-01-05 00:43:31.040464 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040478 | orchestrator | Monday 05 January 2026  00:43:25 +0000 (0:00:00.213) 0:00:02.241 ********
2026-01-05 00:43:31.040491 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.040504 | orchestrator |
2026-01-05 00:43:31.040534 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040547 | orchestrator | Monday 05 January 2026  00:43:25 +0000 (0:00:00.190) 0:00:02.432 ********
2026-01-05 00:43:31.040560 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.040573 | orchestrator |
2026-01-05 00:43:31.040587 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040600 | orchestrator | Monday 05 January 2026  00:43:26 +0000 (0:00:00.238) 0:00:02.670 ********
2026-01-05 00:43:31.040612 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.040623 | orchestrator |
2026-01-05 00:43:31.040634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040645 | orchestrator | Monday 05 January 2026  00:43:26 +0000 (0:00:00.231) 0:00:02.901 ********
2026-01-05 00:43:31.040655 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.040666 | orchestrator |
2026-01-05 00:43:31.040677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040688 | orchestrator | Monday 05 January 2026  00:43:26 +0000 (0:00:00.246) 0:00:03.148 ********
2026-01-05 00:43:31.040699 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c)
2026-01-05 00:43:31.040712 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c)
2026-01-05 00:43:31.040723 | orchestrator |
2026-01-05 00:43:31.040734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040764 | orchestrator | Monday 05 January 2026  00:43:27 +0000 (0:00:00.446) 0:00:03.595 ********
2026-01-05 00:43:31.040776 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20)
2026-01-05 00:43:31.040787 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20)
2026-01-05 00:43:31.040798 | orchestrator |
2026-01-05 00:43:31.040809 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040820 | orchestrator | Monday 05 January 2026  00:43:27 +0000 (0:00:00.664) 0:00:04.260 ********
2026-01-05 00:43:31.040831 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2)
2026-01-05 00:43:31.040850 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2)
2026-01-05 00:43:31.040861 | orchestrator |
2026-01-05 00:43:31.040872 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040883 | orchestrator | Monday 05 January 2026  00:43:28 +0000 (0:00:00.569) 0:00:04.829 ********
2026-01-05 00:43:31.040894 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8)
2026-01-05 00:43:31.040905 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8)
2026-01-05 00:43:31.040916 | orchestrator |
2026-01-05 00:43:31.040927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:31.040938 | orchestrator | Monday 05 January 2026  00:43:28 +0000 (0:00:00.699) 0:00:05.528 ********
2026-01-05 00:43:31.040949 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:43:31.040959 | orchestrator |
2026-01-05 00:43:31.040970 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:31.040981 | orchestrator | Monday 05 January 2026  00:43:29 +0000 (0:00:00.317) 0:00:05.846 ********
2026-01-05 00:43:31.040992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:43:31.041004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:43:31.041014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:43:31.041047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:43:31.041059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:43:31.041070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:43:31.041080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:43:31.041091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:43:31.041102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-05 00:43:31.041112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:43:31.041123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:43:31.041139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:43:31.041150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:43:31.041161 | orchestrator |
2026-01-05 00:43:31.041172 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:31.041183 | orchestrator | Monday 05 January 2026  00:43:29 +0000 (0:00:00.387) 0:00:06.234 ********
2026-01-05 00:43:31.041194 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.041205 | orchestrator |
2026-01-05 00:43:31.041216 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:31.041227 | orchestrator | Monday 05 January 2026  00:43:29 +0000 (0:00:00.216) 0:00:06.450 ********
2026-01-05 00:43:31.041238 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.041248 | orchestrator |
2026-01-05 00:43:31.041259 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:31.041270 | orchestrator | Monday 05 January 2026  00:43:30 +0000 (0:00:00.172) 0:00:06.623 ********
2026-01-05 00:43:31.041281 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.041292 | orchestrator |
2026-01-05 00:43:31.041302 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:31.041313 | orchestrator | Monday 05 January 2026  00:43:30 +0000 (0:00:00.186) 0:00:06.810 ********
2026-01-05 00:43:31.041324 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.041339 | orchestrator |
2026-01-05 00:43:31.041351 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:31.041361 | orchestrator | Monday 05 January 2026  00:43:30 +0000 (0:00:00.185) 0:00:06.995 ********
2026-01-05 00:43:31.041372 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.041383 | orchestrator |
2026-01-05 00:43:31.041394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:31.041405 | orchestrator | Monday 05 January 2026  00:43:30 +0000 (0:00:00.182) 0:00:07.177 ********
2026-01-05 00:43:31.041415 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.041426 | orchestrator |
2026-01-05 00:43:31.041437 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:31.041448 | orchestrator | Monday 05 January 2026  00:43:30 +0000 (0:00:00.195) 0:00:07.373 ********
2026-01-05 00:43:31.041459 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:31.041470 | orchestrator |
2026-01-05 00:43:31.041486 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:39.782710 | orchestrator | Monday 05 January 2026  00:43:31 +0000 (0:00:00.196) 0:00:07.569 ********
2026-01-05 00:43:39.782807 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.782815 | orchestrator |
2026-01-05 00:43:39.782822 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:39.782828 | orchestrator | Monday 05 January 2026  00:43:31 +0000 (0:00:00.234) 0:00:07.804 ********
2026-01-05 00:43:39.782834 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-05 00:43:39.782840 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-05 00:43:39.782846 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-05 00:43:39.782852 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-05 00:43:39.782857 | orchestrator |
2026-01-05 00:43:39.782863 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:39.782869 | orchestrator | Monday 05 January 2026  00:43:32 +0000 (0:00:01.126) 0:00:08.931 ********
2026-01-05 00:43:39.782874 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.782879 | orchestrator |
2026-01-05 00:43:39.782885 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:39.782890 | orchestrator | Monday 05 January 2026  00:43:32 +0000 (0:00:00.197) 0:00:09.128 ********
2026-01-05 00:43:39.782895 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.782900 | orchestrator |
2026-01-05 00:43:39.782905 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:39.782911 | orchestrator | Monday 05 January 2026  00:43:32 +0000 (0:00:00.317) 0:00:09.445 ********
2026-01-05 00:43:39.782916 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.782921 | orchestrator |
2026-01-05 00:43:39.782926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:39.782931 | orchestrator | Monday 05 January 2026  00:43:33 +0000 (0:00:00.254) 0:00:09.700 ********
2026-01-05 00:43:39.782937 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.782942 | orchestrator |
2026-01-05 00:43:39.782947 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-05 00:43:39.782952 | orchestrator | Monday 05 January 2026  00:43:33 +0000 (0:00:00.243) 0:00:09.944 ********
2026-01-05 00:43:39.782957 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.782962 | orchestrator |
2026-01-05 00:43:39.782967 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-05 00:43:39.782972 | orchestrator | Monday 05 January 2026  00:43:33 +0000 (0:00:00.134) 0:00:10.078 ********
2026-01-05 00:43:39.782978 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c0354e6-1633-54b4-ae3c-130b25b2cb6c'}})
2026-01-05 00:43:39.782984 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0807b7d-156a-51e9-a1ef-1ae613918df1'}})
2026-01-05 00:43:39.782989 | orchestrator |
2026-01-05 00:43:39.782994 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-05 00:43:39.783017 | orchestrator | Monday 05 January 2026  00:43:33 +0000 (0:00:00.192) 0:00:10.270 ********
2026-01-05 00:43:39.783024 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783029 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783034 | orchestrator |
2026-01-05 00:43:39.783040 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-05 00:43:39.783045 | orchestrator | Monday 05 January 2026  00:43:35 +0000 (0:00:02.177) 0:00:12.447 ********
2026-01-05 00:43:39.783050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783057 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783062 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783067 | orchestrator |
2026-01-05 00:43:39.783072 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-05 00:43:39.783077 | orchestrator | Monday 05 January 2026  00:43:36 +0000 (0:00:00.188) 0:00:12.636 ********
2026-01-05 00:43:39.783082 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783087 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783092 | orchestrator |
2026-01-05 00:43:39.783098 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-05 00:43:39.783103 | orchestrator | Monday 05 January 2026  00:43:37 +0000 (0:00:01.507) 0:00:14.143 ********
2026-01-05 00:43:39.783108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783113 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783119 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783124 | orchestrator |
2026-01-05 00:43:39.783129 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-05 00:43:39.783134 | orchestrator | Monday 05 January 2026  00:43:37 +0000 (0:00:00.155) 0:00:14.298 ********
2026-01-05 00:43:39.783150 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783156 | orchestrator |
2026-01-05 00:43:39.783161 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-05 00:43:39.783166 | orchestrator | Monday 05 January 2026  00:43:37 +0000 (0:00:00.140) 0:00:14.439 ********
2026-01-05 00:43:39.783171 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783177 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783182 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783187 | orchestrator |
2026-01-05 00:43:39.783192 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-05 00:43:39.783197 | orchestrator | Monday 05 January 2026  00:43:38 +0000 (0:00:00.318) 0:00:14.757 ********
2026-01-05 00:43:39.783202 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783207 | orchestrator |
2026-01-05 00:43:39.783212 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-05 00:43:39.783218 | orchestrator | Monday 05 January 2026  00:43:38 +0000 (0:00:00.139) 0:00:14.897 ********
2026-01-05 00:43:39.783229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783239 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783244 | orchestrator |
2026-01-05 00:43:39.783249 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-05 00:43:39.783255 | orchestrator | Monday 05 January 2026  00:43:38 +0000 (0:00:00.159) 0:00:15.056 ********
2026-01-05 00:43:39.783260 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783266 | orchestrator |
2026-01-05 00:43:39.783272 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-05 00:43:39.783278 | orchestrator | Monday 05 January 2026  00:43:38 +0000 (0:00:00.144) 0:00:15.201 ********
2026-01-05 00:43:39.783284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783290 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783296 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783301 | orchestrator |
2026-01-05 00:43:39.783308 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-05 00:43:39.783314 | orchestrator | Monday 05 January 2026  00:43:38 +0000 (0:00:00.205) 0:00:15.407 ********
2026-01-05 00:43:39.783320 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:43:39.783326 | orchestrator |
2026-01-05 00:43:39.783332 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-05 00:43:39.783354 | orchestrator | Monday 05 January 2026  00:43:39 +0000 (0:00:00.165) 0:00:15.572 ********
2026-01-05 00:43:39.783363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783376 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783381 | orchestrator |
2026-01-05 00:43:39.783387 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-05 00:43:39.783393 | orchestrator | Monday 05 January 2026  00:43:39 +0000 (0:00:00.195) 0:00:15.768 ********
2026-01-05 00:43:39.783399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783405 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783410 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783416 | orchestrator |
2026-01-05 00:43:39.783423 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-05 00:43:39.783429 | orchestrator | Monday 05 January 2026  00:43:39 +0000 (0:00:00.170) 0:00:15.939 ********
2026-01-05 00:43:39.783435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:39.783440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})
2026-01-05 00:43:39.783446 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783452 | orchestrator |
2026-01-05 00:43:39.783458 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-05 00:43:39.783468 | orchestrator | Monday 05 January 2026  00:43:39 +0000 (0:00:00.208) 0:00:16.147 ********
2026-01-05 00:43:39.783474 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:39.783480 | orchestrator |
2026-01-05 00:43:39.783486 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-05 00:43:39.783496 | orchestrator | Monday 05 January 2026  00:43:39 +0000 (0:00:00.162) 0:00:16.310 ********
2026-01-05 00:43:47.009806 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.009959 | orchestrator |
2026-01-05 00:43:47.009988 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-05 00:43:47.010009 | orchestrator | Monday 05 January 2026  00:43:39 +0000 (0:00:00.153) 0:00:16.463 ********
2026-01-05 00:43:47.010088 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.010108 | orchestrator |
2026-01-05 00:43:47.010128 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-05 00:43:47.010148 | orchestrator | Monday 05 January 2026  00:43:40 +0000 (0:00:00.180) 0:00:16.644 ********
2026-01-05 00:43:47.010167 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:43:47.010188 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-05 00:43:47.010209 | orchestrator | }
2026-01-05 00:43:47.010230 | orchestrator |
2026-01-05 00:43:47.010243 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-05 00:43:47.010254 | orchestrator | Monday 05 January 2026  00:43:40 +0000 (0:00:00.454) 0:00:17.099 ********
2026-01-05 00:43:47.010265 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:43:47.010277 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-05 00:43:47.010289 | orchestrator | }
2026-01-05 00:43:47.010300 | orchestrator |
2026-01-05 00:43:47.010311 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-05 00:43:47.010322 | orchestrator | Monday 05 January 2026  00:43:40 +0000 (0:00:00.163) 0:00:17.262 ********
2026-01-05 00:43:47.010335 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:43:47.010346 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-05 00:43:47.010357 | orchestrator | }
2026-01-05 00:43:47.010368 | orchestrator |
2026-01-05 00:43:47.010379 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-05 00:43:47.010390 | orchestrator | Monday 05 January 2026  00:43:40 +0000 (0:00:00.145) 0:00:17.408 ********
2026-01-05 00:43:47.010401 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:43:47.010412 | orchestrator |
2026-01-05 00:43:47.010423 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-05 00:43:47.010434 | orchestrator | Monday 05 January 2026  00:43:41 +0000 (0:00:00.727) 0:00:18.135 ********
2026-01-05 00:43:47.010445 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:43:47.010456 | orchestrator |
2026-01-05 00:43:47.010466 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-05 00:43:47.010477 | orchestrator | Monday 05 January 2026  00:43:42 +0000 (0:00:00.530) 0:00:18.666 ********
2026-01-05 00:43:47.010488 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:43:47.010536 | orchestrator |
2026-01-05 00:43:47.010548 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-05 00:43:47.010560 | orchestrator | Monday 05 January 2026  00:43:42 +0000 (0:00:00.551) 0:00:19.217 ********
2026-01-05 00:43:47.010571 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:43:47.010582 | orchestrator |
2026-01-05 00:43:47.010593 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-05 00:43:47.010604 | orchestrator | Monday 05 January 2026  00:43:42 +0000 (0:00:00.188) 0:00:19.405 ********
2026-01-05 00:43:47.010615 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.010627 | orchestrator |
2026-01-05 00:43:47.010638 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-05 00:43:47.010649 | orchestrator | Monday 05 January 2026  00:43:42 +0000 (0:00:00.117) 0:00:19.522 ********
2026-01-05 00:43:47.010660 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.010671 | orchestrator |
2026-01-05 00:43:47.010682 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-05 00:43:47.010737 | orchestrator | Monday 05 January 2026  00:43:43 +0000 (0:00:00.151) 0:00:19.674 ********
2026-01-05 00:43:47.010749 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:43:47.010761 | orchestrator |     "vgs_report": {
2026-01-05 00:43:47.010772 | orchestrator |         "vg": []
2026-01-05 00:43:47.010783 | orchestrator |     }
2026-01-05 00:43:47.010794 | orchestrator | }
2026-01-05 00:43:47.010805 | orchestrator |
2026-01-05 00:43:47.010816 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-05 00:43:47.010826 | orchestrator | Monday 05 January 2026  00:43:43 +0000 (0:00:00.152) 0:00:19.827 ********
2026-01-05 00:43:47.010837 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.010848 | orchestrator |
2026-01-05 00:43:47.010859 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-05 00:43:47.010870 | orchestrator | Monday 05 January 2026  00:43:43 +0000 (0:00:00.165) 0:00:19.992 ********
2026-01-05 00:43:47.010881 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.010891 | orchestrator |
2026-01-05 00:43:47.010902 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-05 00:43:47.010913 | orchestrator | Monday 05 January 2026  00:43:43 +0000 (0:00:00.172) 0:00:20.164 ********
2026-01-05 00:43:47.010924 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.010935 | orchestrator |
2026-01-05 00:43:47.010946 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-05 00:43:47.010956 | orchestrator | Monday 05 January 2026  00:43:44 +0000 (0:00:00.407) 0:00:20.572 ********
2026-01-05 00:43:47.010967 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.010978 | orchestrator |
2026-01-05 00:43:47.010989 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-05 00:43:47.011000 | orchestrator | Monday 05 January 2026  00:43:44 +0000 (0:00:00.161) 0:00:20.734 ********
2026-01-05 00:43:47.011011 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011022 | orchestrator |
2026-01-05 00:43:47.011033 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-05 00:43:47.011043 | orchestrator | Monday 05 January 2026  00:43:44 +0000 (0:00:00.168) 0:00:20.903 ********
2026-01-05 00:43:47.011054 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011065 | orchestrator |
2026-01-05 00:43:47.011076 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-05 00:43:47.011086 | orchestrator | Monday 05 January 2026  00:43:44 +0000 (0:00:00.177) 0:00:21.080 ********
2026-01-05 00:43:47.011097 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011108 | orchestrator |
2026-01-05 00:43:47.011119 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-05 00:43:47.011129 | orchestrator | Monday 05 January 2026  00:43:44 +0000 (0:00:00.164) 0:00:21.244 ********
2026-01-05 00:43:47.011162 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011174 | orchestrator |
2026-01-05 00:43:47.011185 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-05 00:43:47.011196 | orchestrator | Monday 05 January 2026  00:43:44 +0000 (0:00:00.159) 0:00:21.404 ********
2026-01-05 00:43:47.011207 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011221 | orchestrator |
2026-01-05 00:43:47.011240 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-05 00:43:47.011257 | orchestrator | Monday 05 January 2026  00:43:45 +0000 (0:00:00.166) 0:00:21.571 ********
2026-01-05 00:43:47.011275 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011292 | orchestrator |
2026-01-05 00:43:47.011311 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-05 00:43:47.011330 | orchestrator | Monday 05 January 2026  00:43:45 +0000 (0:00:00.134) 0:00:21.706 ********
2026-01-05 00:43:47.011348 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011365 | orchestrator |
2026-01-05 00:43:47.011385 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-05 00:43:47.011403 | orchestrator | Monday 05 January 2026  00:43:45 +0000 (0:00:00.131) 0:00:21.838 ********
2026-01-05 00:43:47.011428 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011439 | orchestrator |
2026-01-05 00:43:47.011450 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-05 00:43:47.011461 | orchestrator | Monday 05 January 2026  00:43:45 +0000 (0:00:00.143) 0:00:21.981 ********
2026-01-05 00:43:47.011471 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011482 | orchestrator |
2026-01-05 00:43:47.011528 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-05 00:43:47.011541 | orchestrator | Monday 05 January 2026  00:43:45 +0000 (0:00:00.154) 0:00:22.136 ********
2026-01-05 00:43:47.011552 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:43:47.011563 | orchestrator |
2026-01-05 00:43:47.011574 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-05 00:43:47.011584 | orchestrator | Monday 05 January 2026  00:43:45 +0000 (0:00:00.137) 0:00:22.274 ********
2026-01-05 00:43:47.011597 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})
2026-01-05 00:43:47.011610 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg':
'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:47.011621 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:47.011631 | orchestrator | 2026-01-05 00:43:47.011642 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-05 00:43:47.011653 | orchestrator | Monday 05 January 2026 00:43:46 +0000 (0:00:00.394) 0:00:22.668 ******** 2026-01-05 00:43:47.011664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:47.011675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:47.011686 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:47.011697 | orchestrator | 2026-01-05 00:43:47.011708 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-05 00:43:47.011719 | orchestrator | Monday 05 January 2026 00:43:46 +0000 (0:00:00.207) 0:00:22.876 ******** 2026-01-05 00:43:47.011730 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:47.011741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:47.011751 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:47.011762 | orchestrator | 2026-01-05 00:43:47.011773 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-05 00:43:47.011784 | orchestrator | Monday 05 January 2026 00:43:46 +0000 (0:00:00.173) 0:00:23.050 ******** 2026-01-05 00:43:47.011795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:47.011806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:47.011817 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:47.011827 | orchestrator | 2026-01-05 00:43:47.011838 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-05 00:43:47.011849 | orchestrator | Monday 05 January 2026 00:43:46 +0000 (0:00:00.161) 0:00:23.211 ******** 2026-01-05 00:43:47.011860 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:47.011871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:47.011889 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:47.011899 | orchestrator | 2026-01-05 00:43:47.011910 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-05 00:43:47.011931 | orchestrator | Monday 05 January 2026 00:43:46 +0000 (0:00:00.170) 0:00:23.382 ******** 2026-01-05 00:43:47.011950 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:52.699804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:52.699914 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:52.699925 | orchestrator | 2026-01-05 00:43:52.699933 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-05 00:43:52.699941 | orchestrator | Monday 05 January 2026 00:43:47 +0000 (0:00:00.159) 0:00:23.542 ******** 2026-01-05 00:43:52.699949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:52.699956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:52.699963 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:52.699970 | orchestrator | 2026-01-05 00:43:52.699977 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-05 00:43:52.699984 | orchestrator | Monday 05 January 2026 00:43:47 +0000 (0:00:00.160) 0:00:23.702 ******** 2026-01-05 00:43:52.699991 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:52.699998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:52.700005 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:52.700012 | orchestrator | 2026-01-05 00:43:52.700018 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-05 00:43:52.700025 | orchestrator | Monday 05 January 2026 00:43:47 +0000 (0:00:00.151) 0:00:23.853 ******** 2026-01-05 00:43:52.700032 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:43:52.700040 | orchestrator | 2026-01-05 00:43:52.700046 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-05 00:43:52.700053 | orchestrator | Monday 05 January 2026 00:43:47 +0000 
(0:00:00.566) 0:00:24.420 ******** 2026-01-05 00:43:52.700060 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:43:52.700067 | orchestrator | 2026-01-05 00:43:52.700073 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-05 00:43:52.700080 | orchestrator | Monday 05 January 2026 00:43:48 +0000 (0:00:00.499) 0:00:24.920 ******** 2026-01-05 00:43:52.700087 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:43:52.700093 | orchestrator | 2026-01-05 00:43:52.700100 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-05 00:43:52.700107 | orchestrator | Monday 05 January 2026 00:43:48 +0000 (0:00:00.163) 0:00:25.083 ******** 2026-01-05 00:43:52.700114 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'vg_name': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'}) 2026-01-05 00:43:52.700137 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'vg_name': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'}) 2026-01-05 00:43:52.700144 | orchestrator | 2026-01-05 00:43:52.700151 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-05 00:43:52.700158 | orchestrator | Monday 05 January 2026 00:43:48 +0000 (0:00:00.196) 0:00:25.280 ******** 2026-01-05 00:43:52.700183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:52.700190 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:52.700197 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:52.700204 | orchestrator | 2026-01-05 00:43:52.700210 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-05 00:43:52.700217 | orchestrator | Monday 05 January 2026 00:43:49 +0000 (0:00:00.383) 0:00:25.663 ******** 2026-01-05 00:43:52.700224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:52.700230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:52.700237 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:52.700244 | orchestrator | 2026-01-05 00:43:52.700251 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-05 00:43:52.700257 | orchestrator | Monday 05 January 2026 00:43:49 +0000 (0:00:00.175) 0:00:25.839 ******** 2026-01-05 00:43:52.700264 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'})  2026-01-05 00:43:52.700271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'})  2026-01-05 00:43:52.700277 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:52.700284 | orchestrator | 2026-01-05 00:43:52.700291 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-05 00:43:52.700297 | orchestrator | Monday 05 January 2026 00:43:49 +0000 (0:00:00.171) 0:00:26.011 ******** 2026-01-05 00:43:52.700321 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 00:43:52.700327 | orchestrator |  "lvm_report": { 2026-01-05 00:43:52.700335 | orchestrator |  "lv": [ 2026-01-05 00:43:52.700343 | orchestrator |  { 2026-01-05 00:43:52.700351 | orchestrator |  "lv_name": 
"osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c", 2026-01-05 00:43:52.700359 | orchestrator |  "vg_name": "ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c" 2026-01-05 00:43:52.700367 | orchestrator |  }, 2026-01-05 00:43:52.700374 | orchestrator |  { 2026-01-05 00:43:52.700382 | orchestrator |  "lv_name": "osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1", 2026-01-05 00:43:52.700390 | orchestrator |  "vg_name": "ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1" 2026-01-05 00:43:52.700397 | orchestrator |  } 2026-01-05 00:43:52.700405 | orchestrator |  ], 2026-01-05 00:43:52.700412 | orchestrator |  "pv": [ 2026-01-05 00:43:52.700419 | orchestrator |  { 2026-01-05 00:43:52.700427 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-05 00:43:52.700434 | orchestrator |  "vg_name": "ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c" 2026-01-05 00:43:52.700441 | orchestrator |  }, 2026-01-05 00:43:52.700449 | orchestrator |  { 2026-01-05 00:43:52.700456 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-05 00:43:52.700464 | orchestrator |  "vg_name": "ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1" 2026-01-05 00:43:52.700471 | orchestrator |  } 2026-01-05 00:43:52.700479 | orchestrator |  ] 2026-01-05 00:43:52.700526 | orchestrator |  } 2026-01-05 00:43:52.700535 | orchestrator | } 2026-01-05 00:43:52.700542 | orchestrator | 2026-01-05 00:43:52.700549 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-05 00:43:52.700557 | orchestrator | 2026-01-05 00:43:52.700563 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 00:43:52.700575 | orchestrator | Monday 05 January 2026 00:43:49 +0000 (0:00:00.318) 0:00:26.329 ******** 2026-01-05 00:43:52.700581 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-05 00:43:52.700588 | orchestrator | 2026-01-05 00:43:52.700594 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 
00:43:52.700600 | orchestrator | Monday 05 January 2026 00:43:50 +0000 (0:00:00.278) 0:00:26.607 ******** 2026-01-05 00:43:52.700606 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:43:52.700613 | orchestrator | 2026-01-05 00:43:52.700619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:43:52.700625 | orchestrator | Monday 05 January 2026 00:43:50 +0000 (0:00:00.324) 0:00:26.932 ******** 2026-01-05 00:43:52.700631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-05 00:43:52.700637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:43:52.700643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:43:52.700649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:43:52.700655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:43:52.700662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:43:52.700672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:43:52.700678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:43:52.700684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-05 00:43:52.700690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:43:52.700697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:43:52.700703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:43:52.700709 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:43:52.700715 | orchestrator | 2026-01-05 00:43:52.700721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:43:52.700727 | orchestrator | Monday 05 January 2026 00:43:50 +0000 (0:00:00.528) 0:00:27.461 ******** 2026-01-05 00:43:52.700733 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:52.700739 | orchestrator | 2026-01-05 00:43:52.700745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:43:52.700751 | orchestrator | Monday 05 January 2026 00:43:51 +0000 (0:00:00.197) 0:00:27.658 ******** 2026-01-05 00:43:52.700757 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:52.700764 | orchestrator | 2026-01-05 00:43:52.700770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:43:52.700776 | orchestrator | Monday 05 January 2026 00:43:51 +0000 (0:00:00.189) 0:00:27.848 ******** 2026-01-05 00:43:52.700782 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:52.700788 | orchestrator | 2026-01-05 00:43:52.700794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:43:52.700800 | orchestrator | Monday 05 January 2026 00:43:51 +0000 (0:00:00.662) 0:00:28.510 ******** 2026-01-05 00:43:52.700806 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:52.700812 | orchestrator | 2026-01-05 00:43:52.700818 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:43:52.700824 | orchestrator | Monday 05 January 2026 00:43:52 +0000 (0:00:00.204) 0:00:28.715 ******** 2026-01-05 00:43:52.700830 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:52.700836 | orchestrator | 2026-01-05 00:43:52.700842 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-05 00:43:52.700855 | orchestrator | Monday 05 January 2026 00:43:52 +0000 (0:00:00.257) 0:00:28.973 ******** 2026-01-05 00:43:52.700861 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:52.700867 | orchestrator | 2026-01-05 00:43:52.700878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:44:04.435555 | orchestrator | Monday 05 January 2026 00:43:52 +0000 (0:00:00.256) 0:00:29.229 ******** 2026-01-05 00:44:04.435653 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.435662 | orchestrator | 2026-01-05 00:44:04.435668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:44:04.435673 | orchestrator | Monday 05 January 2026 00:43:52 +0000 (0:00:00.221) 0:00:29.451 ******** 2026-01-05 00:44:04.435678 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.435682 | orchestrator | 2026-01-05 00:44:04.435686 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:44:04.435690 | orchestrator | Monday 05 January 2026 00:43:53 +0000 (0:00:00.197) 0:00:29.648 ******** 2026-01-05 00:44:04.435695 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57) 2026-01-05 00:44:04.435701 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57) 2026-01-05 00:44:04.435705 | orchestrator | 2026-01-05 00:44:04.435709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:44:04.435713 | orchestrator | Monday 05 January 2026 00:43:53 +0000 (0:00:00.453) 0:00:30.102 ******** 2026-01-05 00:44:04.435717 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1) 2026-01-05 00:44:04.435721 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1) 2026-01-05 00:44:04.435732 | orchestrator | 2026-01-05 00:44:04.435735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:44:04.435739 | orchestrator | Monday 05 January 2026 00:43:54 +0000 (0:00:00.515) 0:00:30.617 ******** 2026-01-05 00:44:04.435743 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613) 2026-01-05 00:44:04.435747 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613) 2026-01-05 00:44:04.435751 | orchestrator | 2026-01-05 00:44:04.435755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:44:04.435759 | orchestrator | Monday 05 January 2026 00:43:54 +0000 (0:00:00.471) 0:00:31.088 ******** 2026-01-05 00:44:04.435763 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763) 2026-01-05 00:44:04.435767 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763) 2026-01-05 00:44:04.435771 | orchestrator | 2026-01-05 00:44:04.435775 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:44:04.435779 | orchestrator | Monday 05 January 2026 00:43:55 +0000 (0:00:00.692) 0:00:31.781 ******** 2026-01-05 00:44:04.435783 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 00:44:04.435787 | orchestrator | 2026-01-05 00:44:04.435791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.435795 | orchestrator | Monday 05 January 2026 00:43:55 +0000 (0:00:00.693) 0:00:32.474 ******** 2026-01-05 00:44:04.435799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-05 00:44:04.435804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:44:04.435808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:44:04.435812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:44:04.435816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:44:04.435854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:44:04.435858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:44:04.435862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:44:04.435866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-05 00:44:04.435870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:44:04.435874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:44:04.435878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:44:04.435882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:44:04.435886 | orchestrator | 2026-01-05 00:44:04.435890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.435894 | orchestrator | Monday 05 January 2026 00:43:56 +0000 (0:00:00.643) 0:00:33.117 ******** 2026-01-05 00:44:04.435898 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.435901 | orchestrator | 2026-01-05 
00:44:04.435906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.435910 | orchestrator | Monday 05 January 2026 00:43:56 +0000 (0:00:00.214) 0:00:33.332 ******** 2026-01-05 00:44:04.435914 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.435918 | orchestrator | 2026-01-05 00:44:04.435921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.435925 | orchestrator | Monday 05 January 2026 00:43:57 +0000 (0:00:00.207) 0:00:33.540 ******** 2026-01-05 00:44:04.435929 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.435933 | orchestrator | 2026-01-05 00:44:04.435948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.435953 | orchestrator | Monday 05 January 2026 00:43:57 +0000 (0:00:00.213) 0:00:33.753 ******** 2026-01-05 00:44:04.435956 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.435960 | orchestrator | 2026-01-05 00:44:04.435964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.435968 | orchestrator | Monday 05 January 2026 00:43:57 +0000 (0:00:00.218) 0:00:33.972 ******** 2026-01-05 00:44:04.435972 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.435976 | orchestrator | 2026-01-05 00:44:04.435979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.435983 | orchestrator | Monday 05 January 2026 00:43:57 +0000 (0:00:00.211) 0:00:34.183 ******** 2026-01-05 00:44:04.435987 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.435991 | orchestrator | 2026-01-05 00:44:04.435995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.435999 | orchestrator | Monday 05 January 2026 00:43:57 +0000 (0:00:00.231) 
0:00:34.415 ******** 2026-01-05 00:44:04.436002 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.436006 | orchestrator | 2026-01-05 00:44:04.436010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.436014 | orchestrator | Monday 05 January 2026 00:43:58 +0000 (0:00:00.239) 0:00:34.654 ******** 2026-01-05 00:44:04.436018 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.436021 | orchestrator | 2026-01-05 00:44:04.436025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.436029 | orchestrator | Monday 05 January 2026 00:43:58 +0000 (0:00:00.218) 0:00:34.872 ******** 2026-01-05 00:44:04.436033 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-05 00:44:04.436037 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-05 00:44:04.436042 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-05 00:44:04.436045 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-05 00:44:04.436053 | orchestrator | 2026-01-05 00:44:04.436057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.436061 | orchestrator | Monday 05 January 2026 00:43:59 +0000 (0:00:00.942) 0:00:35.814 ******** 2026-01-05 00:44:04.436065 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.436068 | orchestrator | 2026-01-05 00:44:04.436072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.436076 | orchestrator | Monday 05 January 2026 00:43:59 +0000 (0:00:00.216) 0:00:36.031 ******** 2026-01-05 00:44:04.436080 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:44:04.436083 | orchestrator | 2026-01-05 00:44:04.436087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:04.436091 | orchestrator | Monday 05 
January 2026 00:44:00 +0000 (0:00:00.750) 0:00:36.782 ********
2026-01-05 00:44:04.436095 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:04.436099 | orchestrator |
2026-01-05 00:44:04.436102 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:44:04.436106 | orchestrator | Monday 05 January 2026 00:44:00 +0000 (0:00:00.196) 0:00:36.978 ********
2026-01-05 00:44:04.436110 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:04.436114 | orchestrator |
2026-01-05 00:44:04.436118 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-05 00:44:04.436124 | orchestrator | Monday 05 January 2026 00:44:00 +0000 (0:00:00.224) 0:00:37.203 ********
2026-01-05 00:44:04.436128 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:04.436132 | orchestrator |
2026-01-05 00:44:04.436136 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-05 00:44:04.436140 | orchestrator | Monday 05 January 2026 00:44:00 +0000 (0:00:00.146) 0:00:37.350 ********
2026-01-05 00:44:04.436144 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'}})
2026-01-05 00:44:04.436148 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7959794c-cc9c-59d9-9b66-2faefa464ed4'}})
2026-01-05 00:44:04.436152 | orchestrator |
2026-01-05 00:44:04.436155 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-05 00:44:04.436159 | orchestrator | Monday 05 January 2026 00:44:00 +0000 (0:00:00.187) 0:00:37.537 ********
2026-01-05 00:44:04.436165 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:04.436170 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:04.436174 | orchestrator |
2026-01-05 00:44:04.436178 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-05 00:44:04.436182 | orchestrator | Monday 05 January 2026 00:44:02 +0000 (0:00:01.913) 0:00:39.451 ********
2026-01-05 00:44:04.436185 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:04.436191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:04.436195 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:04.436198 | orchestrator |
2026-01-05 00:44:04.436202 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-05 00:44:04.436206 | orchestrator | Monday 05 January 2026 00:44:03 +0000 (0:00:00.168) 0:00:39.619 ********
2026-01-05 00:44:04.436210 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:04.436216 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:10.435460 | orchestrator |
2026-01-05 00:44:10.435618 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-05 00:44:10.435633 | orchestrator | Monday 05 January 2026 00:44:04 +0000 (0:00:01.346) 0:00:40.965 ********
2026-01-05 00:44:10.435643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:10.435655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:10.435664 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.435674 | orchestrator |
2026-01-05 00:44:10.435683 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-05 00:44:10.435692 | orchestrator | Monday 05 January 2026 00:44:04 +0000 (0:00:00.183) 0:00:41.148 ********
2026-01-05 00:44:10.435701 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.435710 | orchestrator |
2026-01-05 00:44:10.435719 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-05 00:44:10.435727 | orchestrator | Monday 05 January 2026 00:44:04 +0000 (0:00:00.136) 0:00:41.285 ********
2026-01-05 00:44:10.435736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:10.435745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:10.435754 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.435762 | orchestrator |
2026-01-05 00:44:10.435771 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-05 00:44:10.435780 | orchestrator | Monday 05 January 2026 00:44:04 +0000 (0:00:00.182) 0:00:41.468 ********
2026-01-05 00:44:10.435788 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.435797 | orchestrator |
2026-01-05 00:44:10.435805 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-05 00:44:10.435814 | orchestrator | Monday 05 January 2026 00:44:05 +0000 (0:00:00.144) 0:00:41.612 ********
2026-01-05 00:44:10.435823 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:10.435832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:10.435841 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.435849 | orchestrator |
2026-01-05 00:44:10.435858 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-05 00:44:10.435885 | orchestrator | Monday 05 January 2026 00:44:05 +0000 (0:00:00.381) 0:00:41.994 ********
2026-01-05 00:44:10.435894 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.435903 | orchestrator |
2026-01-05 00:44:10.435912 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-05 00:44:10.435920 | orchestrator | Monday 05 January 2026 00:44:05 +0000 (0:00:00.179) 0:00:42.173 ********
2026-01-05 00:44:10.435929 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:10.435938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:10.435947 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.435955 | orchestrator |
2026-01-05 00:44:10.435964 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-05 00:44:10.435972 | orchestrator | Monday 05 January 2026 00:44:05 +0000 (0:00:00.157) 0:00:42.330 ********
2026-01-05 00:44:10.435981 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:10.436014 | orchestrator |
2026-01-05 00:44:10.436026 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-05 00:44:10.436036 | orchestrator | Monday 05 January 2026 00:44:05 +0000 (0:00:00.159) 0:00:42.490 ********
2026-01-05 00:44:10.436046 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:10.436056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:10.436066 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436076 | orchestrator |
2026-01-05 00:44:10.436086 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-05 00:44:10.436096 | orchestrator | Monday 05 January 2026 00:44:06 +0000 (0:00:00.171) 0:00:42.662 ********
2026-01-05 00:44:10.436106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:10.436117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:10.436127 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436137 | orchestrator |
2026-01-05 00:44:10.436146 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-05 00:44:10.436171 | orchestrator | Monday 05 January 2026 00:44:06 +0000 (0:00:00.174) 0:00:42.837 ********
2026-01-05 00:44:10.436181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:10.436189 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:10.436198 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436206 | orchestrator |
2026-01-05 00:44:10.436215 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-05 00:44:10.436223 | orchestrator | Monday 05 January 2026 00:44:06 +0000 (0:00:00.171) 0:00:43.008 ********
2026-01-05 00:44:10.436232 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436241 | orchestrator |
2026-01-05 00:44:10.436249 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-05 00:44:10.436258 | orchestrator | Monday 05 January 2026 00:44:06 +0000 (0:00:00.171) 0:00:43.180 ********
2026-01-05 00:44:10.436266 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436275 | orchestrator |
2026-01-05 00:44:10.436283 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-05 00:44:10.436292 | orchestrator | Monday 05 January 2026 00:44:06 +0000 (0:00:00.210) 0:00:43.390 ********
2026-01-05 00:44:10.436300 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436309 | orchestrator |
2026-01-05 00:44:10.436317 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-05 00:44:10.436326 | orchestrator | Monday 05 January 2026 00:44:07 +0000 (0:00:00.165) 0:00:43.556 ********
2026-01-05 00:44:10.436335 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:44:10.436343 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-01-05 00:44:10.436352 | orchestrator | }
2026-01-05 00:44:10.436361 | orchestrator |
2026-01-05 00:44:10.436369 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-05 00:44:10.436378 | orchestrator | Monday 05 January 2026 00:44:07 +0000 (0:00:00.152) 0:00:43.708 ********
2026-01-05 00:44:10.436387 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:44:10.436395 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-01-05 00:44:10.436404 | orchestrator | }
2026-01-05 00:44:10.436412 | orchestrator |
2026-01-05 00:44:10.436421 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-05 00:44:10.436430 | orchestrator | Monday 05 January 2026 00:44:07 +0000 (0:00:00.177) 0:00:43.886 ********
2026-01-05 00:44:10.436445 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:44:10.436454 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-01-05 00:44:10.436463 | orchestrator | }
2026-01-05 00:44:10.436488 | orchestrator |
2026-01-05 00:44:10.436497 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-05 00:44:10.436505 | orchestrator | Monday 05 January 2026 00:44:07 +0000 (0:00:00.401) 0:00:44.287 ********
2026-01-05 00:44:10.436514 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:10.436522 | orchestrator |
2026-01-05 00:44:10.436531 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-05 00:44:10.436540 | orchestrator | Monday 05 January 2026 00:44:08 +0000 (0:00:00.532) 0:00:44.820 ********
2026-01-05 00:44:10.436548 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:10.436557 | orchestrator |
2026-01-05 00:44:10.436566 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-05 00:44:10.436574 | orchestrator | Monday 05 January 2026 00:44:08 +0000 (0:00:00.545) 0:00:45.366 ********
2026-01-05 00:44:10.436583 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:10.436591 | orchestrator |
2026-01-05 00:44:10.436600 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-05 00:44:10.436609 | orchestrator | Monday 05 January 2026 00:44:09 +0000 (0:00:00.517) 0:00:45.883 ********
2026-01-05 00:44:10.436617 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:10.436626 | orchestrator |
2026-01-05 00:44:10.436634 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-05 00:44:10.436643 | orchestrator | Monday 05 January 2026 00:44:09 +0000 (0:00:00.144) 0:00:46.028 ********
2026-01-05 00:44:10.436652 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436660 | orchestrator |
2026-01-05 00:44:10.436675 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-05 00:44:10.436684 | orchestrator | Monday 05 January 2026 00:44:09 +0000 (0:00:00.110) 0:00:46.139 ********
2026-01-05 00:44:10.436693 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436701 | orchestrator |
2026-01-05 00:44:10.436710 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-05 00:44:10.436718 | orchestrator | Monday 05 January 2026 00:44:09 +0000 (0:00:00.140) 0:00:46.252 ********
2026-01-05 00:44:10.436727 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:44:10.436736 | orchestrator |  "vgs_report": {
2026-01-05 00:44:10.436745 | orchestrator |  "vg": []
2026-01-05 00:44:10.436754 | orchestrator |  }
2026-01-05 00:44:10.436763 | orchestrator | }
2026-01-05 00:44:10.436771 | orchestrator |
2026-01-05 00:44:10.436780 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-05 00:44:10.436789 | orchestrator | Monday 05 January 2026 00:44:09 +0000 (0:00:00.140) 0:00:46.393 ********
2026-01-05 00:44:10.436797 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436806 | orchestrator |
2026-01-05 00:44:10.436814 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-05 00:44:10.436823 | orchestrator | Monday 05 January 2026 00:44:09 +0000 (0:00:00.143) 0:00:46.537 ********
2026-01-05 00:44:10.436832 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436840 | orchestrator |
2026-01-05 00:44:10.436849 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-05 00:44:10.436857 | orchestrator | Monday 05 January 2026 00:44:10 +0000 (0:00:00.132) 0:00:46.670 ********
2026-01-05 00:44:10.436866 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436874 | orchestrator |
2026-01-05 00:44:10.436883 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-05 00:44:10.436892 | orchestrator | Monday 05 January 2026 00:44:10 +0000 (0:00:00.162) 0:00:46.833 ********
2026-01-05 00:44:10.436900 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:10.436909 | orchestrator |
2026-01-05 00:44:10.436924 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-05 00:44:15.274368 | orchestrator | Monday 05 January 2026 00:44:10 +0000 (0:00:00.136) 0:00:46.969 ********
2026-01-05 00:44:15.274564 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.274583 | orchestrator |
2026-01-05 00:44:15.274632 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-05 00:44:15.274645 | orchestrator | Monday 05 January 2026 00:44:10 +0000 (0:00:00.437) 0:00:47.407 ********
2026-01-05 00:44:15.274655 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.274665 | orchestrator |
2026-01-05 00:44:15.274675 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-05 00:44:15.274685 | orchestrator | Monday 05 January 2026 00:44:11 +0000 (0:00:00.141) 0:00:47.549 ********
2026-01-05 00:44:15.274695 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.274704 | orchestrator |
2026-01-05 00:44:15.274714 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-05 00:44:15.274724 | orchestrator | Monday 05 January 2026 00:44:11 +0000 (0:00:00.139) 0:00:47.688 ********
2026-01-05 00:44:15.274735 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.274752 | orchestrator |
2026-01-05 00:44:15.274769 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-05 00:44:15.274786 | orchestrator | Monday 05 January 2026 00:44:11 +0000 (0:00:00.129) 0:00:47.817 ********
2026-01-05 00:44:15.274801 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.274816 | orchestrator |
2026-01-05 00:44:15.274832 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-05 00:44:15.274847 | orchestrator | Monday 05 January 2026 00:44:11 +0000 (0:00:00.147) 0:00:47.965 ********
2026-01-05 00:44:15.274861 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.274877 | orchestrator |
2026-01-05 00:44:15.274894 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-05 00:44:15.274909 | orchestrator | Monday 05 January 2026 00:44:11 +0000 (0:00:00.143) 0:00:48.109 ********
2026-01-05 00:44:15.274926 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.274941 | orchestrator |
2026-01-05 00:44:15.274951 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-05 00:44:15.274961 | orchestrator | Monday 05 January 2026 00:44:11 +0000 (0:00:00.135) 0:00:48.244 ********
2026-01-05 00:44:15.274970 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.274979 | orchestrator |
2026-01-05 00:44:15.274989 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-05 00:44:15.274999 | orchestrator | Monday 05 January 2026 00:44:11 +0000 (0:00:00.151) 0:00:48.395 ********
2026-01-05 00:44:15.275012 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275028 | orchestrator |
2026-01-05 00:44:15.275045 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-05 00:44:15.275060 | orchestrator | Monday 05 January 2026 00:44:12 +0000 (0:00:00.165) 0:00:48.561 ********
2026-01-05 00:44:15.275075 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275092 | orchestrator |
2026-01-05 00:44:15.275108 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-05 00:44:15.275144 | orchestrator | Monday 05 January 2026 00:44:12 +0000 (0:00:00.142) 0:00:48.703 ********
2026-01-05 00:44:15.275157 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275178 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275187 | orchestrator |
2026-01-05 00:44:15.275197 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-05 00:44:15.275207 | orchestrator | Monday 05 January 2026 00:44:12 +0000 (0:00:00.146) 0:00:48.850 ********
2026-01-05 00:44:15.275216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275247 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275256 | orchestrator |
2026-01-05 00:44:15.275266 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-05 00:44:15.275275 | orchestrator | Monday 05 January 2026 00:44:12 +0000 (0:00:00.157) 0:00:49.007 ********
2026-01-05 00:44:15.275285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275303 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275313 | orchestrator |
2026-01-05 00:44:15.275322 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-05 00:44:15.275332 | orchestrator | Monday 05 January 2026 00:44:12 +0000 (0:00:00.150) 0:00:49.157 ********
2026-01-05 00:44:15.275341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275360 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275370 | orchestrator |
2026-01-05 00:44:15.275401 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-05 00:44:15.275412 | orchestrator | Monday 05 January 2026 00:44:12 +0000 (0:00:00.373) 0:00:49.530 ********
2026-01-05 00:44:15.275421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275431 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275441 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275450 | orchestrator |
2026-01-05 00:44:15.275484 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-05 00:44:15.275496 | orchestrator | Monday 05 January 2026 00:44:13 +0000 (0:00:00.155) 0:00:49.685 ********
2026-01-05 00:44:15.275506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275525 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275535 | orchestrator |
2026-01-05 00:44:15.275544 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-05 00:44:15.275554 | orchestrator | Monday 05 January 2026 00:44:13 +0000 (0:00:00.158) 0:00:49.844 ********
2026-01-05 00:44:15.275564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275583 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275618 | orchestrator |
2026-01-05 00:44:15.275629 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-05 00:44:15.275639 | orchestrator | Monday 05 January 2026 00:44:13 +0000 (0:00:00.161) 0:00:50.006 ********
2026-01-05 00:44:15.275657 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275683 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275692 | orchestrator |
2026-01-05 00:44:15.275702 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-05 00:44:15.275712 | orchestrator | Monday 05 January 2026 00:44:13 +0000 (0:00:00.152) 0:00:50.159 ********
2026-01-05 00:44:15.275721 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:15.275731 | orchestrator |
2026-01-05 00:44:15.275740 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-05 00:44:15.275750 | orchestrator | Monday 05 January 2026 00:44:14 +0000 (0:00:00.517) 0:00:50.677 ********
2026-01-05 00:44:15.275760 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:15.275769 | orchestrator |
2026-01-05 00:44:15.275779 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-05 00:44:15.275789 | orchestrator | Monday 05 January 2026 00:44:14 +0000 (0:00:00.526) 0:00:51.203 ********
2026-01-05 00:44:15.275798 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:15.275808 | orchestrator |
2026-01-05 00:44:15.275817 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-05 00:44:15.275827 | orchestrator | Monday 05 January 2026 00:44:14 +0000 (0:00:00.147) 0:00:51.350 ********
2026-01-05 00:44:15.275837 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'vg_name': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275848 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'vg_name': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275858 | orchestrator |
2026-01-05 00:44:15.275867 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-05 00:44:15.275877 | orchestrator | Monday 05 January 2026 00:44:14 +0000 (0:00:00.161) 0:00:51.512 ********
2026-01-05 00:44:15.275886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:15.275905 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:15.275915 | orchestrator |
2026-01-05 00:44:15.275931 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-05 00:44:15.275947 | orchestrator | Monday 05 January 2026 00:44:15 +0000 (0:00:00.155) 0:00:51.667 ********
2026-01-05 00:44:15.275963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:15.275988 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:21.763535 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:21.763652 | orchestrator |
2026-01-05 00:44:21.763664 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-05 00:44:21.763674 | orchestrator | Monday 05 January 2026 00:44:15 +0000 (0:00:00.138) 0:00:51.806 ********
2026-01-05 00:44:21.763681 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'})
2026-01-05 00:44:21.763690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'})
2026-01-05 00:44:21.763696 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:21.763726 | orchestrator |
2026-01-05 00:44:21.763734 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-05 00:44:21.763741 | orchestrator | Monday 05 January 2026 00:44:15 +0000 (0:00:00.149) 0:00:51.955 ********
2026-01-05 00:44:21.763748 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:44:21.763755 | orchestrator |  "lvm_report": {
2026-01-05 00:44:21.763764 | orchestrator |  "lv": [
2026-01-05 00:44:21.763771 | orchestrator |  {
2026-01-05 00:44:21.763778 | orchestrator |  "lv_name": "osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4",
2026-01-05 00:44:21.763786 | orchestrator |  "vg_name": "ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4"
2026-01-05 00:44:21.763793 | orchestrator |  },
2026-01-05 00:44:21.763799 | orchestrator |  {
2026-01-05 00:44:21.763806 | orchestrator |  "lv_name": "osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4",
2026-01-05 00:44:21.763813 | orchestrator |  "vg_name": "ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4"
2026-01-05 00:44:21.763820 | orchestrator |  }
2026-01-05 00:44:21.763827 | orchestrator |  ],
2026-01-05 00:44:21.763834 | orchestrator |  "pv": [
2026-01-05 00:44:21.763842 | orchestrator |  {
2026-01-05 00:44:21.763849 | orchestrator |  "pv_name": "/dev/sdb",
2026-01-05 00:44:21.763856 | orchestrator |  "vg_name": "ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4"
2026-01-05 00:44:21.763864 | orchestrator |  },
2026-01-05 00:44:21.763871 | orchestrator |  {
2026-01-05 00:44:21.763878 | orchestrator |  "pv_name": "/dev/sdc",
2026-01-05 00:44:21.763885 | orchestrator |  "vg_name": "ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4"
2026-01-05 00:44:21.763893 | orchestrator |  }
2026-01-05 00:44:21.763900 | orchestrator |  ]
2026-01-05 00:44:21.763907 | orchestrator |  }
2026-01-05 00:44:21.763915 | orchestrator | }
2026-01-05 00:44:21.763923 | orchestrator |
2026-01-05 00:44:21.763930 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-05 00:44:21.763937 | orchestrator |
2026-01-05 00:44:21.763944 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:44:21.763952 | orchestrator | Monday 05 January 2026 00:44:15 +0000 (0:00:00.387) 0:00:52.342 ********
2026-01-05 00:44:21.763960 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-05 00:44:21.763967 | orchestrator |
2026-01-05 00:44:21.763976 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:44:21.763983 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:00.253) 0:00:52.596 ********
2026-01-05 00:44:21.763991 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:44:21.763998 | orchestrator |
2026-01-05 00:44:21.764006 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764014 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:00.259) 0:00:52.856 ********
2026-01-05 00:44:21.764021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:44:21.764029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:44:21.764037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:44:21.764044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:44:21.764052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:44:21.764059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:44:21.764066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:44:21.764073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-05 00:44:21.764079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-05 00:44:21.764092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-05 00:44:21.764100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-05 00:44:21.764106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-05 00:44:21.764114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-05 00:44:21.764121 | orchestrator |
2026-01-05 00:44:21.764134 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764142 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:00.467) 0:00:53.324 ********
2026-01-05 00:44:21.764148 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:21.764154 | orchestrator |
2026-01-05 00:44:21.764161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764167 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:00.214) 0:00:53.538 ********
2026-01-05 00:44:21.764173 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:21.764179 | orchestrator |
2026-01-05 00:44:21.764186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764210 | orchestrator | Monday 05 January 2026 00:44:17 +0000 (0:00:00.227) 0:00:53.766 ********
2026-01-05 00:44:21.764218 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:21.764225 | orchestrator |
2026-01-05 00:44:21.764232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764239 | orchestrator | Monday 05 January 2026 00:44:17 +0000 (0:00:00.201) 0:00:53.967 ********
2026-01-05 00:44:21.764245 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:21.764252 | orchestrator |
2026-01-05 00:44:21.764260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764319 | orchestrator | Monday 05 January 2026 00:44:17 +0000 (0:00:00.214) 0:00:54.182 ********
2026-01-05 00:44:21.764327 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:21.764333 | orchestrator |
2026-01-05 00:44:21.764340 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764346 | orchestrator | Monday 05 January 2026 00:44:17 +0000 (0:00:00.198) 0:00:54.381 ********
2026-01-05 00:44:21.764353 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:21.764359 | orchestrator |
2026-01-05 00:44:21.764365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764371 | orchestrator | Monday 05 January 2026 00:44:18 +0000 (0:00:00.794) 0:00:55.176 ********
2026-01-05 00:44:21.764377 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:21.764383 | orchestrator |
2026-01-05 00:44:21.764389 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764396 | orchestrator | Monday 05 January 2026 00:44:18 +0000 (0:00:00.237) 0:00:55.413 ********
2026-01-05 00:44:21.764403 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:21.764410 | orchestrator |
2026-01-05 00:44:21.764417 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764424 | orchestrator | Monday 05 January 2026 00:44:19 +0000 (0:00:00.232) 0:00:55.646 ********
2026-01-05 00:44:21.764431 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab)
2026-01-05 00:44:21.764440 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab)
2026-01-05 00:44:21.764448 | orchestrator |
2026-01-05 00:44:21.764478 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764491 | orchestrator | Monday 05 January 2026 00:44:19 +0000 (0:00:00.444) 0:00:56.091 ********
2026-01-05 00:44:21.764497 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421)
2026-01-05 00:44:21.764504 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421)
2026-01-05 00:44:21.764511 | orchestrator |
2026-01-05 00:44:21.764525 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764537 | orchestrator | Monday 05 January 2026 00:44:20 +0000 (0:00:00.465) 0:00:56.557 ********
2026-01-05 00:44:21.764544 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678)
2026-01-05 00:44:21.764551 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678)
2026-01-05 00:44:21.764558 | orchestrator |
2026-01-05 00:44:21.764564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764571 | orchestrator | Monday 05 January 2026 00:44:20 +0000 (0:00:00.472) 0:00:57.029 ********
2026-01-05 00:44:21.764578 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a)
2026-01-05 00:44:21.764585 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a)
2026-01-05 00:44:21.764592 | orchestrator |
2026-01-05 00:44:21.764599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:44:21.764606 | orchestrator | Monday 05 January 2026 00:44:20 +0000 (0:00:00.457) 0:00:57.487 ********
2026-01-05 00:44:21.764612 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:44:21.764618 | orchestrator |
2026-01-05 00:44:21.764626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:44:21.764633 | orchestrator | Monday 05 January 2026 00:44:21 +0000 (0:00:00.364) 0:00:57.851 ********
2026-01-05 00:44:21.764638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:44:21.764644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:44:21.764651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:44:21.764657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:44:21.764663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:44:21.764670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:44:21.764676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:44:21.764682 | orchestrator | included:
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-05 00:44:21.764689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-05 00:44:21.764695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-05 00:44:21.764702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-05 00:44:21.764717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-05 00:44:31.059995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-05 00:44:31.060118 | orchestrator | 2026-01-05 00:44:31.060136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060149 | orchestrator | Monday 05 January 2026 00:44:21 +0000 (0:00:00.440) 0:00:58.292 ******** 2026-01-05 00:44:31.060161 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060173 | orchestrator | 2026-01-05 00:44:31.060185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060196 | orchestrator | Monday 05 January 2026 00:44:21 +0000 (0:00:00.203) 0:00:58.496 ******** 2026-01-05 00:44:31.060207 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060217 | orchestrator | 2026-01-05 00:44:31.060228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060239 | orchestrator | Monday 05 January 2026 00:44:22 +0000 (0:00:00.714) 0:00:59.211 ******** 2026-01-05 00:44:31.060276 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060288 | orchestrator | 2026-01-05 00:44:31.060299 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060310 | 
orchestrator | Monday 05 January 2026 00:44:22 +0000 (0:00:00.205) 0:00:59.417 ******** 2026-01-05 00:44:31.060320 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060331 | orchestrator | 2026-01-05 00:44:31.060342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060353 | orchestrator | Monday 05 January 2026 00:44:23 +0000 (0:00:00.207) 0:00:59.624 ******** 2026-01-05 00:44:31.060364 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060374 | orchestrator | 2026-01-05 00:44:31.060385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060396 | orchestrator | Monday 05 January 2026 00:44:23 +0000 (0:00:00.220) 0:00:59.845 ******** 2026-01-05 00:44:31.060407 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060417 | orchestrator | 2026-01-05 00:44:31.060428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060469 | orchestrator | Monday 05 January 2026 00:44:23 +0000 (0:00:00.241) 0:01:00.086 ******** 2026-01-05 00:44:31.060490 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060504 | orchestrator | 2026-01-05 00:44:31.060517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060530 | orchestrator | Monday 05 January 2026 00:44:23 +0000 (0:00:00.215) 0:01:00.301 ******** 2026-01-05 00:44:31.060542 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060555 | orchestrator | 2026-01-05 00:44:31.060567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060579 | orchestrator | Monday 05 January 2026 00:44:24 +0000 (0:00:00.250) 0:01:00.551 ******** 2026-01-05 00:44:31.060609 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-05 00:44:31.060623 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-05 00:44:31.060636 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-05 00:44:31.060648 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-05 00:44:31.060661 | orchestrator | 2026-01-05 00:44:31.060674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060687 | orchestrator | Monday 05 January 2026 00:44:24 +0000 (0:00:00.689) 0:01:01.241 ******** 2026-01-05 00:44:31.060700 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060713 | orchestrator | 2026-01-05 00:44:31.060726 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060739 | orchestrator | Monday 05 January 2026 00:44:24 +0000 (0:00:00.206) 0:01:01.447 ******** 2026-01-05 00:44:31.060752 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060765 | orchestrator | 2026-01-05 00:44:31.060778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060792 | orchestrator | Monday 05 January 2026 00:44:25 +0000 (0:00:00.198) 0:01:01.646 ******** 2026-01-05 00:44:31.060811 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060831 | orchestrator | 2026-01-05 00:44:31.060852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:44:31.060871 | orchestrator | Monday 05 January 2026 00:44:25 +0000 (0:00:00.222) 0:01:01.868 ******** 2026-01-05 00:44:31.060884 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.060897 | orchestrator | 2026-01-05 00:44:31.060910 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-05 00:44:31.060923 | orchestrator | Monday 05 January 2026 00:44:25 +0000 (0:00:00.217) 0:01:02.086 ******** 2026-01-05 00:44:31.060935 | orchestrator | skipping: [testbed-node-5] 2026-01-05 
00:44:31.060945 | orchestrator | 2026-01-05 00:44:31.060956 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-05 00:44:31.060967 | orchestrator | Monday 05 January 2026 00:44:25 +0000 (0:00:00.345) 0:01:02.431 ******** 2026-01-05 00:44:31.060979 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1631feb6-d96c-5a43-89dd-a558edd73d68'}}) 2026-01-05 00:44:31.061010 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c322448e-6042-58d0-bdfa-5021630018c9'}}) 2026-01-05 00:44:31.061028 | orchestrator | 2026-01-05 00:44:31.061046 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-05 00:44:31.061063 | orchestrator | Monday 05 January 2026 00:44:26 +0000 (0:00:00.213) 0:01:02.644 ******** 2026-01-05 00:44:31.061076 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'}) 2026-01-05 00:44:31.061088 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'}) 2026-01-05 00:44:31.061099 | orchestrator | 2026-01-05 00:44:31.061110 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-05 00:44:31.061156 | orchestrator | Monday 05 January 2026 00:44:27 +0000 (0:00:01.869) 0:01:04.514 ******** 2026-01-05 00:44:31.061186 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:31.061207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:31.061224 | orchestrator | skipping: 
[testbed-node-5] 2026-01-05 00:44:31.061241 | orchestrator | 2026-01-05 00:44:31.061259 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-05 00:44:31.061279 | orchestrator | Monday 05 January 2026 00:44:28 +0000 (0:00:00.152) 0:01:04.666 ******** 2026-01-05 00:44:31.061298 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'}) 2026-01-05 00:44:31.061315 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'}) 2026-01-05 00:44:31.061332 | orchestrator | 2026-01-05 00:44:31.061349 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-05 00:44:31.061368 | orchestrator | Monday 05 January 2026 00:44:29 +0000 (0:00:01.288) 0:01:05.955 ******** 2026-01-05 00:44:31.061387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:31.061405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:31.061422 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.061591 | orchestrator | 2026-01-05 00:44:31.061612 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-05 00:44:31.061624 | orchestrator | Monday 05 January 2026 00:44:29 +0000 (0:00:00.161) 0:01:06.117 ******** 2026-01-05 00:44:31.061635 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.061646 | orchestrator | 2026-01-05 00:44:31.061657 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-05 00:44:31.061668 | 
orchestrator | Monday 05 January 2026 00:44:29 +0000 (0:00:00.136) 0:01:06.253 ******** 2026-01-05 00:44:31.061689 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:31.061701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:31.061712 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.061722 | orchestrator | 2026-01-05 00:44:31.061733 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-05 00:44:31.061786 | orchestrator | Monday 05 January 2026 00:44:29 +0000 (0:00:00.157) 0:01:06.411 ******** 2026-01-05 00:44:31.061799 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.061810 | orchestrator | 2026-01-05 00:44:31.061820 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-05 00:44:31.061831 | orchestrator | Monday 05 January 2026 00:44:30 +0000 (0:00:00.148) 0:01:06.559 ******** 2026-01-05 00:44:31.061842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:31.061853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:31.061864 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.061875 | orchestrator | 2026-01-05 00:44:31.061885 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-05 00:44:31.061896 | orchestrator | Monday 05 January 2026 00:44:30 +0000 (0:00:00.159) 0:01:06.719 ******** 2026-01-05 00:44:31.061907 | orchestrator | 
skipping: [testbed-node-5] 2026-01-05 00:44:31.061917 | orchestrator | 2026-01-05 00:44:31.061928 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-05 00:44:31.061939 | orchestrator | Monday 05 January 2026 00:44:30 +0000 (0:00:00.155) 0:01:06.874 ******** 2026-01-05 00:44:31.061950 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:31.061961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:31.061972 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:31.061982 | orchestrator | 2026-01-05 00:44:31.061993 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-05 00:44:31.062004 | orchestrator | Monday 05 January 2026 00:44:30 +0000 (0:00:00.154) 0:01:07.028 ******** 2026-01-05 00:44:31.062085 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:44:31.062097 | orchestrator | 2026-01-05 00:44:31.062108 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-05 00:44:31.062119 | orchestrator | Monday 05 January 2026 00:44:30 +0000 (0:00:00.387) 0:01:07.416 ******** 2026-01-05 00:44:31.062144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:37.879593 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:37.879727 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.879754 | orchestrator | 2026-01-05 00:44:37.879775 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-05 00:44:37.879796 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:00.176) 0:01:07.593 ******** 2026-01-05 00:44:37.879817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:37.879838 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:37.879856 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.879873 | orchestrator | 2026-01-05 00:44:37.879893 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-05 00:44:37.879912 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:00.166) 0:01:07.759 ******** 2026-01-05 00:44:37.879929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:37.879947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:37.880012 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.880031 | orchestrator | 2026-01-05 00:44:37.880049 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-05 00:44:37.880068 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:00.153) 0:01:07.913 ******** 2026-01-05 00:44:37.880259 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.880287 | orchestrator | 2026-01-05 00:44:37.880305 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-05 00:44:37.880323 | orchestrator | Monday 05 January 2026 00:44:31 +0000 
(0:00:00.158) 0:01:08.071 ******** 2026-01-05 00:44:37.880342 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.880361 | orchestrator | 2026-01-05 00:44:37.880380 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-05 00:44:37.880398 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:00.157) 0:01:08.228 ******** 2026-01-05 00:44:37.880417 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.880510 | orchestrator | 2026-01-05 00:44:37.880533 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-05 00:44:37.880551 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:00.142) 0:01:08.371 ******** 2026-01-05 00:44:37.880569 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:44:37.880587 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-05 00:44:37.880605 | orchestrator | } 2026-01-05 00:44:37.880624 | orchestrator | 2026-01-05 00:44:37.880642 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-05 00:44:37.880659 | orchestrator | Monday 05 January 2026 00:44:32 +0000 (0:00:00.175) 0:01:08.546 ******** 2026-01-05 00:44:37.880678 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:44:37.880695 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-05 00:44:37.880713 | orchestrator | } 2026-01-05 00:44:37.880731 | orchestrator | 2026-01-05 00:44:37.880749 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-05 00:44:37.880766 | orchestrator | Monday 05 January 2026 00:44:32 +0000 (0:00:00.162) 0:01:08.708 ******** 2026-01-05 00:44:37.880785 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:44:37.880803 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-05 00:44:37.880821 | orchestrator | } 2026-01-05 00:44:37.880838 | orchestrator | 2026-01-05 00:44:37.880855 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-05 00:44:37.880873 | orchestrator | Monday 05 January 2026 00:44:32 +0000 (0:00:00.178) 0:01:08.886 ******** 2026-01-05 00:44:37.880891 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:44:37.880911 | orchestrator | 2026-01-05 00:44:37.880928 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-05 00:44:37.880947 | orchestrator | Monday 05 January 2026 00:44:32 +0000 (0:00:00.585) 0:01:09.472 ******** 2026-01-05 00:44:37.880963 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:44:37.880980 | orchestrator | 2026-01-05 00:44:37.880998 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-05 00:44:37.881015 | orchestrator | Monday 05 January 2026 00:44:33 +0000 (0:00:00.545) 0:01:10.017 ******** 2026-01-05 00:44:37.881034 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:44:37.881051 | orchestrator | 2026-01-05 00:44:37.881068 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-05 00:44:37.881086 | orchestrator | Monday 05 January 2026 00:44:34 +0000 (0:00:00.811) 0:01:10.828 ******** 2026-01-05 00:44:37.881103 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:44:37.881121 | orchestrator | 2026-01-05 00:44:37.881139 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-05 00:44:37.881158 | orchestrator | Monday 05 January 2026 00:44:34 +0000 (0:00:00.167) 0:01:10.996 ******** 2026-01-05 00:44:37.881176 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.881194 | orchestrator | 2026-01-05 00:44:37.881211 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-05 00:44:37.881256 | orchestrator | Monday 05 January 2026 00:44:34 +0000 (0:00:00.166) 0:01:11.162 ******** 2026-01-05 00:44:37.881275 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.881293 | orchestrator | 2026-01-05 00:44:37.881312 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-05 00:44:37.881358 | orchestrator | Monday 05 January 2026 00:44:34 +0000 (0:00:00.167) 0:01:11.330 ******** 2026-01-05 00:44:37.881377 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:44:37.881395 | orchestrator |  "vgs_report": { 2026-01-05 00:44:37.881415 | orchestrator |  "vg": [] 2026-01-05 00:44:37.881508 | orchestrator |  } 2026-01-05 00:44:37.881532 | orchestrator | } 2026-01-05 00:44:37.881552 | orchestrator | 2026-01-05 00:44:37.881573 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-05 00:44:37.881592 | orchestrator | Monday 05 January 2026 00:44:34 +0000 (0:00:00.176) 0:01:11.507 ******** 2026-01-05 00:44:37.881611 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.881631 | orchestrator | 2026-01-05 00:44:37.881650 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-05 00:44:37.881670 | orchestrator | Monday 05 January 2026 00:44:35 +0000 (0:00:00.160) 0:01:11.668 ******** 2026-01-05 00:44:37.881689 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.881709 | orchestrator | 2026-01-05 00:44:37.881728 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-05 00:44:37.881748 | orchestrator | Monday 05 January 2026 00:44:35 +0000 (0:00:00.181) 0:01:11.849 ******** 2026-01-05 00:44:37.881768 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.881787 | orchestrator | 2026-01-05 00:44:37.881807 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-05 00:44:37.881826 | orchestrator | Monday 05 January 2026 00:44:35 +0000 (0:00:00.178) 0:01:12.028 ******** 2026-01-05 00:44:37.881844 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.881863 | orchestrator | 2026-01-05 00:44:37.881883 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-05 00:44:37.881903 | orchestrator | Monday 05 January 2026 00:44:35 +0000 (0:00:00.153) 0:01:12.181 ******** 2026-01-05 00:44:37.881922 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.881941 | orchestrator | 2026-01-05 00:44:37.881961 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-05 00:44:37.881979 | orchestrator | Monday 05 January 2026 00:44:35 +0000 (0:00:00.146) 0:01:12.328 ******** 2026-01-05 00:44:37.881998 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882102 | orchestrator | 2026-01-05 00:44:37.882128 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-05 00:44:37.882146 | orchestrator | Monday 05 January 2026 00:44:35 +0000 (0:00:00.140) 0:01:12.469 ******** 2026-01-05 00:44:37.882163 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882181 | orchestrator | 2026-01-05 00:44:37.882199 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-05 00:44:37.882218 | orchestrator | Monday 05 January 2026 00:44:36 +0000 (0:00:00.143) 0:01:12.612 ******** 2026-01-05 00:44:37.882236 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882254 | orchestrator | 2026-01-05 00:44:37.882272 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-05 00:44:37.882290 | orchestrator | Monday 05 January 2026 00:44:36 +0000 (0:00:00.366) 0:01:12.979 ******** 2026-01-05 00:44:37.882307 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882327 | orchestrator | 2026-01-05 00:44:37.882357 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-05 00:44:37.882376 | orchestrator | Monday 05 January 2026 00:44:36 +0000 (0:00:00.149) 0:01:13.128 ******** 2026-01-05 00:44:37.882395 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882413 | orchestrator | 2026-01-05 00:44:37.882460 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-05 00:44:37.882500 | orchestrator | Monday 05 January 2026 00:44:36 +0000 (0:00:00.149) 0:01:13.278 ******** 2026-01-05 00:44:37.882519 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882538 | orchestrator | 2026-01-05 00:44:37.882556 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-05 00:44:37.882576 | orchestrator | Monday 05 January 2026 00:44:36 +0000 (0:00:00.189) 0:01:13.467 ******** 2026-01-05 00:44:37.882595 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882612 | orchestrator | 2026-01-05 00:44:37.882631 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-05 00:44:37.882650 | orchestrator | Monday 05 January 2026 00:44:37 +0000 (0:00:00.127) 0:01:13.595 ******** 2026-01-05 00:44:37.882668 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882686 | orchestrator | 2026-01-05 00:44:37.882704 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-05 00:44:37.882721 | orchestrator | Monday 05 January 2026 00:44:37 +0000 (0:00:00.141) 0:01:13.737 ******** 2026-01-05 00:44:37.882739 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882757 | orchestrator | 2026-01-05 00:44:37.882775 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-05 00:44:37.882793 | orchestrator | Monday 05 January 2026 00:44:37 +0000 (0:00:00.145) 0:01:13.882 ******** 2026-01-05 00:44:37.882811 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:37.882830 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:37.882849 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882868 | orchestrator | 2026-01-05 00:44:37.882887 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-05 00:44:37.882906 | orchestrator | Monday 05 January 2026 00:44:37 +0000 (0:00:00.174) 0:01:14.057 ******** 2026-01-05 00:44:37.882924 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:37.882941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:37.882959 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:37.882977 | orchestrator | 2026-01-05 00:44:37.882994 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-05 00:44:37.883013 | orchestrator | Monday 05 January 2026 00:44:37 +0000 (0:00:00.171) 0:01:14.229 ******** 2026-01-05 00:44:37.883052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082244 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082353 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082363 | orchestrator | 2026-01-05 00:44:41.082371 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-05 00:44:41.082380 | orchestrator | Monday 05 January 2026 00:44:37 +0000 (0:00:00.183) 0:01:14.412 ******** 2026-01-05 00:44:41.082388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082401 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082408 | orchestrator | 2026-01-05 00:44:41.082414 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-05 00:44:41.082506 | orchestrator | Monday 05 January 2026 00:44:38 +0000 (0:00:00.172) 0:01:14.584 ******** 2026-01-05 00:44:41.082515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082529 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082535 | orchestrator | 2026-01-05 00:44:41.082542 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-05 00:44:41.082549 | orchestrator | Monday 05 January 2026 00:44:38 +0000 (0:00:00.162) 0:01:14.746 ******** 2026-01-05 00:44:41.082555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082585 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082592 | orchestrator | 2026-01-05 00:44:41.082598 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-05 00:44:41.082605 | orchestrator | Monday 05 January 2026 00:44:38 +0000 (0:00:00.411) 0:01:15.158 ******** 2026-01-05 00:44:41.082611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082625 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082632 | orchestrator | 2026-01-05 00:44:41.082639 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-05 00:44:41.082646 | orchestrator | Monday 05 January 2026 00:44:38 +0000 (0:00:00.175) 0:01:15.334 ******** 2026-01-05 00:44:41.082652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082659 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082665 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082672 | orchestrator | 2026-01-05 00:44:41.082678 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-05 00:44:41.082685 | orchestrator | Monday 05 January 2026 00:44:38 +0000 (0:00:00.193) 0:01:15.528 ******** 2026-01-05 00:44:41.082691 | 
orchestrator | ok: [testbed-node-5] 2026-01-05 00:44:41.082699 | orchestrator | 2026-01-05 00:44:41.082706 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-05 00:44:41.082713 | orchestrator | Monday 05 January 2026 00:44:39 +0000 (0:00:00.523) 0:01:16.052 ******** 2026-01-05 00:44:41.082720 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:44:41.082726 | orchestrator | 2026-01-05 00:44:41.082733 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-05 00:44:41.082739 | orchestrator | Monday 05 January 2026 00:44:40 +0000 (0:00:00.531) 0:01:16.583 ******** 2026-01-05 00:44:41.082746 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:44:41.082752 | orchestrator | 2026-01-05 00:44:41.082759 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-05 00:44:41.082765 | orchestrator | Monday 05 January 2026 00:44:40 +0000 (0:00:00.176) 0:01:16.760 ******** 2026-01-05 00:44:41.082772 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'vg_name': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'}) 2026-01-05 00:44:41.082779 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'vg_name': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'}) 2026-01-05 00:44:41.082791 | orchestrator | 2026-01-05 00:44:41.082798 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-05 00:44:41.082805 | orchestrator | Monday 05 January 2026 00:44:40 +0000 (0:00:00.187) 0:01:16.947 ******** 2026-01-05 00:44:41.082828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082842 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082848 | orchestrator | 2026-01-05 00:44:41.082855 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-05 00:44:41.082863 | orchestrator | Monday 05 January 2026 00:44:40 +0000 (0:00:00.163) 0:01:17.111 ******** 2026-01-05 00:44:41.082870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082885 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082891 | orchestrator | 2026-01-05 00:44:41.082898 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-05 00:44:41.082905 | orchestrator | Monday 05 January 2026 00:44:40 +0000 (0:00:00.172) 0:01:17.284 ******** 2026-01-05 00:44:41.082911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'})  2026-01-05 00:44:41.082918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'})  2026-01-05 00:44:41.082925 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:44:41.082932 | orchestrator | 2026-01-05 00:44:41.082938 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-05 00:44:41.082945 | orchestrator | Monday 05 January 2026 00:44:40 +0000 (0:00:00.166) 0:01:17.451 ******** 2026-01-05 00:44:41.082951 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:44:41.082958 | orchestrator |  "lvm_report": { 2026-01-05 00:44:41.082966 | orchestrator |  "lv": [ 2026-01-05 00:44:41.082973 | orchestrator |  { 2026-01-05 00:44:41.082984 | orchestrator |  "lv_name": "osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68", 2026-01-05 00:44:41.082992 | orchestrator |  "vg_name": "ceph-1631feb6-d96c-5a43-89dd-a558edd73d68" 2026-01-05 00:44:41.082999 | orchestrator |  }, 2026-01-05 00:44:41.083006 | orchestrator |  { 2026-01-05 00:44:41.083013 | orchestrator |  "lv_name": "osd-block-c322448e-6042-58d0-bdfa-5021630018c9", 2026-01-05 00:44:41.083020 | orchestrator |  "vg_name": "ceph-c322448e-6042-58d0-bdfa-5021630018c9" 2026-01-05 00:44:41.083027 | orchestrator |  } 2026-01-05 00:44:41.083034 | orchestrator |  ], 2026-01-05 00:44:41.083042 | orchestrator |  "pv": [ 2026-01-05 00:44:41.083048 | orchestrator |  { 2026-01-05 00:44:41.083055 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-05 00:44:41.083062 | orchestrator |  "vg_name": "ceph-1631feb6-d96c-5a43-89dd-a558edd73d68" 2026-01-05 00:44:41.083069 | orchestrator |  }, 2026-01-05 00:44:41.083076 | orchestrator |  { 2026-01-05 00:44:41.083083 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-05 00:44:41.083090 | orchestrator |  "vg_name": "ceph-c322448e-6042-58d0-bdfa-5021630018c9" 2026-01-05 00:44:41.083096 | orchestrator |  } 2026-01-05 00:44:41.083103 | orchestrator |  ] 2026-01-05 00:44:41.083114 | orchestrator |  } 2026-01-05 00:44:41.083121 | orchestrator | } 2026-01-05 00:44:41.083128 | orchestrator | 2026-01-05 00:44:41.083135 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:44:41.083142 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-05 00:44:41.083149 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-05 00:44:41.083156 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-05 00:44:41.083163 | orchestrator | 2026-01-05 00:44:41.083169 | orchestrator | 2026-01-05 00:44:41.083176 | orchestrator | 2026-01-05 00:44:41.083182 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:44:41.083189 | orchestrator | Monday 05 January 2026 00:44:41 +0000 (0:00:00.151) 0:01:17.602 ******** 2026-01-05 00:44:41.083196 | orchestrator | =============================================================================== 2026-01-05 00:44:41.083202 | orchestrator | Create block VGs -------------------------------------------------------- 5.96s 2026-01-05 00:44:41.083209 | orchestrator | Create block LVs -------------------------------------------------------- 4.14s 2026-01-05 00:44:41.083215 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.88s 2026-01-05 00:44:41.083222 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.85s 2026-01-05 00:44:41.083229 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s 2026-01-05 00:44:41.083235 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s 2026-01-05 00:44:41.083242 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2026-01-05 00:44:41.083249 | orchestrator | Add known links to the list of available block devices ------------------ 1.53s 2026-01-05 00:44:41.083259 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s 2026-01-05 00:44:41.520171 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2026-01-05 00:44:41.520287 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2026-01-05 00:44:41.520302 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2026-01-05 00:44:41.520314 | orchestrator | Get initial list of available block devices ----------------------------- 0.81s 2026-01-05 00:44:41.520325 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2026-01-05 00:44:41.520336 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.78s 2026-01-05 00:44:41.520347 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s 2026-01-05 00:44:41.520358 | orchestrator | Calculate size needed for LVs on ceph_wal_devices ----------------------- 0.75s 2026-01-05 00:44:41.520369 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-01-05 00:44:41.520380 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.75s 2026-01-05 00:44:41.520391 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.73s 2026-01-05 00:44:53.997615 | orchestrator | 2026-01-05 00:44:53 | INFO  | Task c52ae6ec-ffe7-43ec-8861-5d78bd3e67f3 (facts) was prepared for execution. 2026-01-05 00:44:53.997740 | orchestrator | 2026-01-05 00:44:53 | INFO  | It takes a moment until task c52ae6ec-ffe7-43ec-8861-5d78bd3e67f3 (facts) has been started and output is visible here. 
2026-01-05 00:45:06.494922 | orchestrator | 2026-01-05 00:45:06.495012 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-05 00:45:06.495019 | orchestrator | 2026-01-05 00:45:06.495024 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-05 00:45:06.495028 | orchestrator | Monday 05 January 2026 00:44:58 +0000 (0:00:00.248) 0:00:00.248 ******** 2026-01-05 00:45:06.495052 | orchestrator | ok: [testbed-manager] 2026-01-05 00:45:06.495058 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:45:06.495062 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:45:06.495066 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:45:06.495070 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:45:06.495075 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:45:06.495081 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:45:06.495087 | orchestrator | 2026-01-05 00:45:06.495093 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-05 00:45:06.495100 | orchestrator | Monday 05 January 2026 00:44:59 +0000 (0:00:01.030) 0:00:01.278 ******** 2026-01-05 00:45:06.495106 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:45:06.495114 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:45:06.495120 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:45:06.495126 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:45:06.495139 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:45:06.495145 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:45:06.495152 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:06.495157 | orchestrator | 2026-01-05 00:45:06.495163 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 00:45:06.495169 | orchestrator | 2026-01-05 00:45:06.495176 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-05 00:45:06.495182 | orchestrator | Monday 05 January 2026 00:45:00 +0000 (0:00:01.114) 0:00:02.393 ******** 2026-01-05 00:45:06.495188 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:45:06.495194 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:45:06.495200 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:45:06.495207 | orchestrator | ok: [testbed-manager] 2026-01-05 00:45:06.495213 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:45:06.495219 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:45:06.495226 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:45:06.495232 | orchestrator | 2026-01-05 00:45:06.495238 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-05 00:45:06.495245 | orchestrator | 2026-01-05 00:45:06.495251 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-05 00:45:06.495257 | orchestrator | Monday 05 January 2026 00:45:05 +0000 (0:00:05.446) 0:00:07.840 ******** 2026-01-05 00:45:06.495263 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:45:06.495269 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:45:06.495275 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:45:06.495281 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:45:06.495287 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:45:06.495293 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:45:06.495299 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:06.495305 | orchestrator | 2026-01-05 00:45:06.495311 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:45:06.495317 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:45:06.495325 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-05 00:45:06.495331 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:45:06.495337 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:45:06.495344 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:45:06.495350 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:45:06.495372 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:45:06.495379 | orchestrator | 2026-01-05 00:45:06.495386 | orchestrator | 2026-01-05 00:45:06.495393 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:45:06.495468 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:00.468) 0:00:08.308 ******** 2026-01-05 00:45:06.495475 | orchestrator | =============================================================================== 2026-01-05 00:45:06.495481 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.45s 2026-01-05 00:45:06.495486 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s 2026-01-05 00:45:06.495491 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.03s 2026-01-05 00:45:06.495495 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2026-01-05 00:45:18.601423 | orchestrator | 2026-01-05 00:45:18 | INFO  | Task f1421d2b-08dd-4e50-8898-fae822b3c885 (frr) was prepared for execution. 2026-01-05 00:45:18.601544 | orchestrator | 2026-01-05 00:45:18 | INFO  | It takes a moment until task f1421d2b-08dd-4e50-8898-fae822b3c885 (frr) has been started and output is visible here. 
2026-01-05 00:45:44.058356 | orchestrator | 2026-01-05 00:45:44.058545 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-05 00:45:44.058563 | orchestrator | 2026-01-05 00:45:44.058577 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-05 00:45:44.058613 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:00.228) 0:00:00.228 ******** 2026-01-05 00:45:44.058627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:45:44.058642 | orchestrator | 2026-01-05 00:45:44.058655 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-05 00:45:44.058668 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:00.220) 0:00:00.449 ******** 2026-01-05 00:45:44.058681 | orchestrator | changed: [testbed-manager] 2026-01-05 00:45:44.058695 | orchestrator | 2026-01-05 00:45:44.058708 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-05 00:45:44.058726 | orchestrator | Monday 05 January 2026 00:45:24 +0000 (0:00:01.236) 0:00:01.685 ******** 2026-01-05 00:45:44.058739 | orchestrator | changed: [testbed-manager] 2026-01-05 00:45:44.058751 | orchestrator | 2026-01-05 00:45:44.058764 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-05 00:45:44.058777 | orchestrator | Monday 05 January 2026 00:45:34 +0000 (0:00:10.281) 0:00:11.967 ******** 2026-01-05 00:45:44.058789 | orchestrator | ok: [testbed-manager] 2026-01-05 00:45:44.058802 | orchestrator | 2026-01-05 00:45:44.058815 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-05 00:45:44.058827 | orchestrator | Monday 05 January 2026 00:45:35 +0000 (0:00:00.969) 0:00:12.937 ******** 2026-01-05 
00:45:44.058840 | orchestrator | changed: [testbed-manager] 2026-01-05 00:45:44.058852 | orchestrator | 2026-01-05 00:45:44.058863 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-05 00:45:44.058874 | orchestrator | Monday 05 January 2026 00:45:36 +0000 (0:00:00.939) 0:00:13.876 ******** 2026-01-05 00:45:44.058886 | orchestrator | ok: [testbed-manager] 2026-01-05 00:45:44.058898 | orchestrator | 2026-01-05 00:45:44.058910 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-05 00:45:44.058923 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:01.121) 0:00:14.997 ******** 2026-01-05 00:45:44.058937 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:45:44.058951 | orchestrator | 2026-01-05 00:45:44.058965 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-05 00:45:44.058980 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:00.147) 0:00:15.145 ******** 2026-01-05 00:45:44.059020 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:45:44.059034 | orchestrator | 2026-01-05 00:45:44.059047 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-05 00:45:44.059061 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:00.157) 0:00:15.303 ******** 2026-01-05 00:45:44.059075 | orchestrator | changed: [testbed-manager] 2026-01-05 00:45:44.059088 | orchestrator | 2026-01-05 00:45:44.059102 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-05 00:45:44.059116 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:00.891) 0:00:16.194 ******** 2026-01-05 00:45:44.059130 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-05 00:45:44.059143 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-05 00:45:44.059158 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-05 00:45:44.059171 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-05 00:45:44.059185 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-05 00:45:44.059199 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-05 00:45:44.059212 | orchestrator | 2026-01-05 00:45:44.059225 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-05 00:45:44.059237 | orchestrator | Monday 05 January 2026 00:45:40 +0000 (0:00:02.028) 0:00:18.223 ******** 2026-01-05 00:45:44.059249 | orchestrator | ok: [testbed-manager] 2026-01-05 00:45:44.059262 | orchestrator | 2026-01-05 00:45:44.059275 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-05 00:45:44.059288 | orchestrator | Monday 05 January 2026 00:45:42 +0000 (0:00:01.530) 0:00:19.753 ******** 2026-01-05 00:45:44.059300 | orchestrator | changed: [testbed-manager] 2026-01-05 00:45:44.059312 | orchestrator | 2026-01-05 00:45:44.059325 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:45:44.059338 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:45:44.059351 | orchestrator | 2026-01-05 00:45:44.059364 | orchestrator | 2026-01-05 00:45:44.059394 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:45:44.059406 | orchestrator | Monday 05 January 2026 00:45:43 +0000 (0:00:01.402) 0:00:21.156 ******** 2026-01-05 00:45:44.059418 | 
orchestrator | =============================================================================== 2026-01-05 00:45:44.059430 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.28s 2026-01-05 00:45:44.059441 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.03s 2026-01-05 00:45:44.059453 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.53s 2026-01-05 00:45:44.059465 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.40s 2026-01-05 00:45:44.059476 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.24s 2026-01-05 00:45:44.059509 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.12s 2026-01-05 00:45:44.059521 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.97s 2026-01-05 00:45:44.059533 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s 2026-01-05 00:45:44.059544 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.89s 2026-01-05 00:45:44.059556 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-01-05 00:45:44.059568 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-01-05 00:45:44.059580 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-01-05 00:45:44.398159 | orchestrator | 2026-01-05 00:45:44.403884 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jan 5 00:45:44 UTC 2026 2026-01-05 00:45:44.403987 | orchestrator | 2026-01-05 00:45:46.469762 | orchestrator | 2026-01-05 00:45:46 | INFO  | Collection nutshell is prepared for execution 2026-01-05 00:45:46.469890 | orchestrator | 2026-01-05 00:45:46 | INFO  | A [0] - 
dotfiles 2026-01-05 00:45:56.551944 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [0] - homer 2026-01-05 00:45:56.552083 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [0] - netdata 2026-01-05 00:45:56.552110 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [0] - openstackclient 2026-01-05 00:45:56.552636 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [0] - phpmyadmin 2026-01-05 00:45:56.552681 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [0] - common 2026-01-05 00:45:56.556441 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [1] -- loadbalancer 2026-01-05 00:45:56.556567 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [2] --- opensearch 2026-01-05 00:45:56.556589 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [2] --- mariadb-ng 2026-01-05 00:45:56.556601 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [3] ---- horizon 2026-01-05 00:45:56.556612 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [3] ---- keystone 2026-01-05 00:45:56.557020 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- neutron 2026-01-05 00:45:56.557084 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [5] ------ wait-for-nova 2026-01-05 00:45:56.557091 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [6] ------- octavia 2026-01-05 00:45:56.558777 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- barbican 2026-01-05 00:45:56.558799 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- designate 2026-01-05 00:45:56.558804 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- ironic 2026-01-05 00:45:56.558809 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- placement 2026-01-05 00:45:56.559117 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- magnum 2026-01-05 00:45:56.559462 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [1] -- openvswitch 2026-01-05 00:45:56.559746 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [2] --- ovn 2026-01-05 00:45:56.560060 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [1] -- memcached 2026-01-05 
00:45:56.560460 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [1] -- redis 2026-01-05 00:45:56.560487 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [1] -- rabbitmq-ng 2026-01-05 00:45:56.560589 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [0] - kubernetes 2026-01-05 00:45:56.563570 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [1] -- kubeconfig 2026-01-05 00:45:56.563605 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [1] -- copy-kubeconfig 2026-01-05 00:45:56.563791 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [0] - ceph 2026-01-05 00:45:56.565933 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [1] -- ceph-pools 2026-01-05 00:45:56.565958 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [2] --- copy-ceph-keys 2026-01-05 00:45:56.566130 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [3] ---- cephclient 2026-01-05 00:45:56.566143 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-05 00:45:56.566344 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- wait-for-keystone 2026-01-05 00:45:56.566357 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-05 00:45:56.566378 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [5] ------ glance 2026-01-05 00:45:56.566532 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [5] ------ cinder 2026-01-05 00:45:56.566542 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [5] ------ nova 2026-01-05 00:45:56.566836 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [4] ----- prometheus 2026-01-05 00:45:56.566849 | orchestrator | 2026-01-05 00:45:56 | INFO  | A [5] ------ grafana 2026-01-05 00:45:56.826344 | orchestrator | 2026-01-05 00:45:56 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-05 00:45:56.826462 | orchestrator | 2026-01-05 00:45:56 | INFO  | Tasks are running in the background 2026-01-05 00:46:00.347685 | orchestrator | 2026-01-05 00:46:00 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-05 00:46:02.463993 | orchestrator | 2026-01-05 00:46:02 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:02.464132 | orchestrator | 2026-01-05 00:46:02 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:02.464487 | orchestrator | 2026-01-05 00:46:02 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:02.464915 | orchestrator | 2026-01-05 00:46:02 | INFO  | Task 662fa67f-f19e-4610-af3d-43e7aca15c5a is in state STARTED 2026-01-05 00:46:02.465433 | orchestrator | 2026-01-05 00:46:02 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:02.465937 | orchestrator | 2026-01-05 00:46:02 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:02.468055 | orchestrator | 2026-01-05 00:46:02 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:02.468083 | orchestrator | 2026-01-05 00:46:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:05.543222 | orchestrator | 2026-01-05 00:46:05 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:05.544376 | orchestrator | 2026-01-05 00:46:05 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:05.544402 | orchestrator | 2026-01-05 00:46:05 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:05.544527 | orchestrator | 2026-01-05 00:46:05 | INFO  | Task 662fa67f-f19e-4610-af3d-43e7aca15c5a is in state STARTED 2026-01-05 00:46:05.545009 | orchestrator | 2026-01-05 00:46:05 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:05.545657 | orchestrator | 2026-01-05 00:46:05 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:05.546101 | orchestrator | 2026-01-05 00:46:05 | INFO  | Task 
166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:05.546259 | orchestrator | 2026-01-05 00:46:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:08.604541 | orchestrator | 2026-01-05 00:46:08 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:08.604791 | orchestrator | 2026-01-05 00:46:08 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:08.605431 | orchestrator | 2026-01-05 00:46:08 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:08.608194 | orchestrator | 2026-01-05 00:46:08 | INFO  | Task 662fa67f-f19e-4610-af3d-43e7aca15c5a is in state STARTED 2026-01-05 00:46:08.609086 | orchestrator | 2026-01-05 00:46:08 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:08.609434 | orchestrator | 2026-01-05 00:46:08 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:08.612714 | orchestrator | 2026-01-05 00:46:08 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:08.612770 | orchestrator | 2026-01-05 00:46:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:11.668428 | orchestrator | 2026-01-05 00:46:11 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:11.668791 | orchestrator | 2026-01-05 00:46:11 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:11.671033 | orchestrator | 2026-01-05 00:46:11 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:11.671494 | orchestrator | 2026-01-05 00:46:11 | INFO  | Task 662fa67f-f19e-4610-af3d-43e7aca15c5a is in state STARTED 2026-01-05 00:46:11.672391 | orchestrator | 2026-01-05 00:46:11 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:11.673147 | orchestrator | 2026-01-05 00:46:11 | INFO  | Task 
1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:11.673731 | orchestrator | 2026-01-05 00:46:11 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:11.673866 | orchestrator | 2026-01-05 00:46:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:14.739212 | orchestrator | 2026-01-05 00:46:14 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:14.739598 | orchestrator | 2026-01-05 00:46:14 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:14.744702 | orchestrator | 2026-01-05 00:46:14 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:14.745158 | orchestrator | 2026-01-05 00:46:14 | INFO  | Task 662fa67f-f19e-4610-af3d-43e7aca15c5a is in state STARTED 2026-01-05 00:46:14.765471 | orchestrator | 2026-01-05 00:46:14 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:14.765576 | orchestrator | 2026-01-05 00:46:14 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:14.765589 | orchestrator | 2026-01-05 00:46:14 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:14.765600 | orchestrator | 2026-01-05 00:46:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:18.103836 | orchestrator | 2026-01-05 00:46:17 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:18.103942 | orchestrator | 2026-01-05 00:46:17 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:18.103948 | orchestrator | 2026-01-05 00:46:17 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:18.103953 | orchestrator | 2026-01-05 00:46:17 | INFO  | Task 662fa67f-f19e-4610-af3d-43e7aca15c5a is in state STARTED 2026-01-05 00:46:18.103957 | orchestrator | 2026-01-05 00:46:17 | INFO  | Task 
503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:18.103961 | orchestrator | 2026-01-05 00:46:17 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:18.103965 | orchestrator | 2026-01-05 00:46:17 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:18.103973 | orchestrator | 2026-01-05 00:46:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:21.019082 | orchestrator | 2026-01-05 00:46:21 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:21.019721 | orchestrator | 2026-01-05 00:46:21 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:21.023514 | orchestrator | 2026-01-05 00:46:21 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:21.024246 | orchestrator | 2026-01-05 00:46:21 | INFO  | Task 662fa67f-f19e-4610-af3d-43e7aca15c5a is in state STARTED 2026-01-05 00:46:21.025958 | orchestrator | 2026-01-05 00:46:21 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:21.028994 | orchestrator | 2026-01-05 00:46:21 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:21.029621 | orchestrator | 2026-01-05 00:46:21 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:21.029638 | orchestrator | 2026-01-05 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:24.105000 | orchestrator | 2026-01-05 00:46:24 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:24.105649 | orchestrator | 2026-01-05 00:46:24 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:24.111892 | orchestrator | 2026-01-05 00:46:24 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:24.117777 | orchestrator | 2026-01-05 00:46:24 | INFO  | Task 
662fa67f-f19e-4610-af3d-43e7aca15c5a is in state STARTED 2026-01-05 00:46:24.118808 | orchestrator | 2026-01-05 00:46:24 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:24.121011 | orchestrator | 2026-01-05 00:46:24 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:24.123936 | orchestrator | 2026-01-05 00:46:24 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:24.123984 | orchestrator | 2026-01-05 00:46:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:27.428838 | orchestrator | 2026-01-05 00:46:27 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:27.428953 | orchestrator | 2026-01-05 00:46:27 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:27.429146 | orchestrator | 2026-01-05 00:46:27 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:27.430289 | orchestrator | 2026-01-05 00:46:27 | INFO  | Task 662fa67f-f19e-4610-af3d-43e7aca15c5a is in state SUCCESS 2026-01-05 00:46:27.433825 | orchestrator | 2026-01-05 00:46:27.433909 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-05 00:46:27.433928 | orchestrator | 2026-01-05 00:46:27.433941 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-01-05 00:46:27.433954 | orchestrator | Monday 05 January 2026 00:46:09 +0000 (0:00:00.610) 0:00:00.610 ******** 2026-01-05 00:46:27.433967 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:27.433981 | orchestrator | changed: [testbed-manager] 2026-01-05 00:46:27.433994 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:27.434005 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:27.434060 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:46:27.434071 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:46:27.434080 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:46:27.434087 | orchestrator | 2026-01-05 00:46:27.434095 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-01-05 00:46:27.434103 | orchestrator | Monday 05 January 2026 00:46:14 +0000 (0:00:04.078) 0:00:04.688 ******** 2026-01-05 00:46:27.434111 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-05 00:46:27.434119 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-05 00:46:27.434127 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-05 00:46:27.434134 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-05 00:46:27.434142 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-05 00:46:27.434176 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-05 00:46:27.434189 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-05 00:46:27.434228 | orchestrator | 2026-01-05 00:46:27.434241 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-01-05 00:46:27.434255 | orchestrator | Monday 05 January 2026 00:46:16 +0000 (0:00:02.221) 0:00:06.910 ******** 2026-01-05 00:46:27.434272 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:46:15.009510', 'end': '2026-01-05 00:46:15.019903', 'delta': '0:00:00.010393', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:46:27.434299 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:46:15.031384', 'end': '2026-01-05 00:46:15.037990', 'delta': '0:00:00.006606', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:46:27.434314 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:46:15.033274', 'end': '2026-01-05 00:46:15.039122', 'delta': '0:00:00.005848', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:46:27.434698 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:46:15.176765', 'end': '2026-01-05 00:46:15.186071', 'delta': '0:00:00.009306', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:46:27.434735 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:46:15.036993', 'end': '2026-01-05 00:46:16.043701', 'delta': '0:00:01.006708', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:46:27.434769 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:46:15.386265', 'end': '2026-01-05 00:46:15.394523', 'delta': '0:00:00.008258', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:46:27.434788 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:46:15.721677', 'end': '2026-01-05 00:46:15.727133', 'delta': '0:00:00.005456', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:46:27.434802 | orchestrator | 2026-01-05 00:46:27.434813 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-01-05 00:46:27.434826 | orchestrator | Monday 05 January 2026 00:46:19 +0000 (0:00:03.120) 0:00:10.031 ******** 2026-01-05 00:46:27.434839 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-05 00:46:27.434852 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-05 00:46:27.434863 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-05 00:46:27.434871 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-05 00:46:27.434878 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-05 00:46:27.434885 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-05 00:46:27.434892 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-05 00:46:27.434899 | orchestrator | 2026-01-05 00:46:27.434907 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-01-05 00:46:27.434914 | orchestrator | Monday 05 January 2026 00:46:21 +0000 (0:00:01.757) 0:00:11.788 ******** 2026-01-05 00:46:27.434922 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-05 00:46:27.434929 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-05 00:46:27.434936 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-05 00:46:27.434944 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-05 00:46:27.434954 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-05 00:46:27.434965 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-05 00:46:27.434977 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-05 00:46:27.434988 | orchestrator | 2026-01-05 00:46:27.435000 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:46:27.435037 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:46:27.435052 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:46:27.435065 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:46:27.435078 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:46:27.435090 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:46:27.435103 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:46:27.435115 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:46:27.435127 | orchestrator | 2026-01-05 00:46:27.435140 | orchestrator | 2026-01-05 00:46:27.435153 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-01-05 00:46:27.435165 | orchestrator | Monday 05 January 2026 00:46:23 +0000 (0:00:02.834) 0:00:14.622 ******** 2026-01-05 00:46:27.435176 | orchestrator | =============================================================================== 2026-01-05 00:46:27.435188 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.08s 2026-01-05 00:46:27.435199 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.12s 2026-01-05 00:46:27.435213 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.83s 2026-01-05 00:46:27.435225 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.22s 2026-01-05 00:46:27.435238 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.76s 2026-01-05 00:46:27.436101 | orchestrator | 2026-01-05 00:46:27 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:27.437529 | orchestrator | 2026-01-05 00:46:27 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:27.440965 | orchestrator | 2026-01-05 00:46:27 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:27.441059 | orchestrator | 2026-01-05 00:46:27 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:27.441078 | orchestrator | 2026-01-05 00:46:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:30.509777 | orchestrator | 2026-01-05 00:46:30 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:30.509868 | orchestrator | 2026-01-05 00:46:30 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:30.509878 | orchestrator | 2026-01-05 00:46:30 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is 
in state STARTED 2026-01-05 00:46:30.509885 | orchestrator | 2026-01-05 00:46:30 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:30.509892 | orchestrator | 2026-01-05 00:46:30 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:30.512838 | orchestrator | 2026-01-05 00:46:30 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:30.516202 | orchestrator | 2026-01-05 00:46:30 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:30.516283 | orchestrator | 2026-01-05 00:46:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:33.559401 | orchestrator | 2026-01-05 00:46:33 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:33.559534 | orchestrator | 2026-01-05 00:46:33 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:33.561276 | orchestrator | 2026-01-05 00:46:33 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:33.561704 | orchestrator | 2026-01-05 00:46:33 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:33.562256 | orchestrator | 2026-01-05 00:46:33 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:33.563394 | orchestrator | 2026-01-05 00:46:33 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:33.564555 | orchestrator | 2026-01-05 00:46:33 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:33.564570 | orchestrator | 2026-01-05 00:46:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:36.621283 | orchestrator | 2026-01-05 00:46:36 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:36.625257 | orchestrator | 2026-01-05 00:46:36 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in 
state STARTED 2026-01-05 00:46:36.625776 | orchestrator | 2026-01-05 00:46:36 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:36.626873 | orchestrator | 2026-01-05 00:46:36 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:36.627274 | orchestrator | 2026-01-05 00:46:36 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:36.632845 | orchestrator | 2026-01-05 00:46:36 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:36.633436 | orchestrator | 2026-01-05 00:46:36 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:36.633478 | orchestrator | 2026-01-05 00:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:39.665566 | orchestrator | 2026-01-05 00:46:39 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:39.669316 | orchestrator | 2026-01-05 00:46:39 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:39.671868 | orchestrator | 2026-01-05 00:46:39 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:39.673160 | orchestrator | 2026-01-05 00:46:39 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:39.675041 | orchestrator | 2026-01-05 00:46:39 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:39.678188 | orchestrator | 2026-01-05 00:46:39 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:39.680128 | orchestrator | 2026-01-05 00:46:39 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:39.680684 | orchestrator | 2026-01-05 00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:42.793012 | orchestrator | 2026-01-05 00:46:42 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state 
STARTED 2026-01-05 00:46:42.793810 | orchestrator | 2026-01-05 00:46:42 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:42.794567 | orchestrator | 2026-01-05 00:46:42 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:42.795543 | orchestrator | 2026-01-05 00:46:42 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:42.796565 | orchestrator | 2026-01-05 00:46:42 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:42.797484 | orchestrator | 2026-01-05 00:46:42 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:42.798271 | orchestrator | 2026-01-05 00:46:42 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:42.798313 | orchestrator | 2026-01-05 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:45.860846 | orchestrator | 2026-01-05 00:46:45 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:45.860955 | orchestrator | 2026-01-05 00:46:45 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:45.860973 | orchestrator | 2026-01-05 00:46:45 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:45.860986 | orchestrator | 2026-01-05 00:46:45 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:45.860999 | orchestrator | 2026-01-05 00:46:45 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:45.861012 | orchestrator | 2026-01-05 00:46:45 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:45.861024 | orchestrator | 2026-01-05 00:46:45 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:45.861037 | orchestrator | 2026-01-05 00:46:45 | INFO  | Wait 1 second(s) until the next check 
2026-01-05 00:46:49.065750 | orchestrator | 2026-01-05 00:46:49 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:49.065931 | orchestrator | 2026-01-05 00:46:49 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:49.065945 | orchestrator | 2026-01-05 00:46:49 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:49.065954 | orchestrator | 2026-01-05 00:46:49 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:49.065963 | orchestrator | 2026-01-05 00:46:49 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:49.065972 | orchestrator | 2026-01-05 00:46:49 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:49.065981 | orchestrator | 2026-01-05 00:46:49 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:49.065990 | orchestrator | 2026-01-05 00:46:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:52.085814 | orchestrator | 2026-01-05 00:46:52 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:52.103304 | orchestrator | 2026-01-05 00:46:52 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:52.106145 | orchestrator | 2026-01-05 00:46:52 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state STARTED 2026-01-05 00:46:52.109800 | orchestrator | 2026-01-05 00:46:52 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:52.110623 | orchestrator | 2026-01-05 00:46:52 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:52.112010 | orchestrator | 2026-01-05 00:46:52 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:52.114630 | orchestrator | 2026-01-05 00:46:52 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is 
in state STARTED 2026-01-05 00:46:52.114668 | orchestrator | 2026-01-05 00:46:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:55.154657 | orchestrator | 2026-01-05 00:46:55 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state STARTED 2026-01-05 00:46:55.156617 | orchestrator | 2026-01-05 00:46:55 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:55.156659 | orchestrator | 2026-01-05 00:46:55 | INFO  | Task 82cae445-a5df-4fc0-9bff-c0f024c84622 is in state SUCCESS 2026-01-05 00:46:55.166111 | orchestrator | 2026-01-05 00:46:55 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:55.166238 | orchestrator | 2026-01-05 00:46:55 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:55.166874 | orchestrator | 2026-01-05 00:46:55 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:55.167153 | orchestrator | 2026-01-05 00:46:55 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED 2026-01-05 00:46:55.167178 | orchestrator | 2026-01-05 00:46:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:58.235213 | orchestrator | 2026-01-05 00:46:58 | INFO  | Task fa794f95-8e0e-4caa-ae03-e507fc786f19 is in state SUCCESS 2026-01-05 00:46:58.235386 | orchestrator | 2026-01-05 00:46:58 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:46:58.235730 | orchestrator | 2026-01-05 00:46:58 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:46:58.236005 | orchestrator | 2026-01-05 00:46:58 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED 2026-01-05 00:46:58.239000 | orchestrator | 2026-01-05 00:46:58 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:46:58.239896 | orchestrator | 2026-01-05 00:46:58 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in 
state STARTED
2026-01-05 00:46:58.239958 | orchestrator | 2026-01-05 00:46:58 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:01.301576 | orchestrator | 2026-01-05 00:47:01 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:01.303833 | orchestrator | 2026-01-05 00:47:01 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:01.305541 | orchestrator | 2026-01-05 00:47:01 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:01.308669 | orchestrator | 2026-01-05 00:47:01 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:01.317002 | orchestrator | 2026-01-05 00:47:01 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:01.317083 | orchestrator | 2026-01-05 00:47:01 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:04.389782 | orchestrator | 2026-01-05 00:47:04 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:04.389894 | orchestrator | 2026-01-05 00:47:04 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:04.390383 | orchestrator | 2026-01-05 00:47:04 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:04.391702 | orchestrator | 2026-01-05 00:47:04 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:04.392820 | orchestrator | 2026-01-05 00:47:04 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:04.392841 | orchestrator | 2026-01-05 00:47:04 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:07.498603 | orchestrator | 2026-01-05 00:47:07 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:07.498720 | orchestrator | 2026-01-05 00:47:07 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:07.498730 | orchestrator | 2026-01-05 00:47:07 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:07.499518 | orchestrator | 2026-01-05 00:47:07 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:07.499984 | orchestrator | 2026-01-05 00:47:07 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:07.500021 | orchestrator | 2026-01-05 00:47:07 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:10.561212 | orchestrator | 2026-01-05 00:47:10 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:10.561381 | orchestrator | 2026-01-05 00:47:10 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:10.561399 | orchestrator | 2026-01-05 00:47:10 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:10.561409 | orchestrator | 2026-01-05 00:47:10 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:10.561420 | orchestrator | 2026-01-05 00:47:10 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:10.561431 | orchestrator | 2026-01-05 00:47:10 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:13.640869 | orchestrator | 2026-01-05 00:47:13 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:13.641691 | orchestrator | 2026-01-05 00:47:13 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:13.644387 | orchestrator | 2026-01-05 00:47:13 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:13.646729 | orchestrator | 2026-01-05 00:47:13 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:13.647616 | orchestrator | 2026-01-05 00:47:13 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:13.647680 | orchestrator | 2026-01-05 00:47:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:16.692141 | orchestrator | 2026-01-05 00:47:16 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:16.692253 | orchestrator | 2026-01-05 00:47:16 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:16.697142 | orchestrator | 2026-01-05 00:47:16 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:16.697385 | orchestrator | 2026-01-05 00:47:16 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:16.697961 | orchestrator | 2026-01-05 00:47:16 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:16.698723 | orchestrator | 2026-01-05 00:47:16 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:19.887270 | orchestrator | 2026-01-05 00:47:19 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:19.887381 | orchestrator | 2026-01-05 00:47:19 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:19.888406 | orchestrator | 2026-01-05 00:47:19 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:19.888755 | orchestrator | 2026-01-05 00:47:19 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:19.889626 | orchestrator | 2026-01-05 00:47:19 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:19.889823 | orchestrator | 2026-01-05 00:47:19 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:22.946468 | orchestrator | 2026-01-05 00:47:22 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:22.947106 | orchestrator | 2026-01-05 00:47:22 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:22.947468 | orchestrator | 2026-01-05 00:47:22 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:22.949959 | orchestrator | 2026-01-05 00:47:22 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:22.950764 | orchestrator | 2026-01-05 00:47:22 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:22.950806 | orchestrator | 2026-01-05 00:47:22 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:26.020517 | orchestrator | 2026-01-05 00:47:26 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:26.024953 | orchestrator | 2026-01-05 00:47:26 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:26.025026 | orchestrator | 2026-01-05 00:47:26 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:26.025032 | orchestrator | 2026-01-05 00:47:26 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:26.042353 | orchestrator | 2026-01-05 00:47:26 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:26.042461 | orchestrator | 2026-01-05 00:47:26 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:29.118687 | orchestrator | 2026-01-05 00:47:29 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:29.120477 | orchestrator | 2026-01-05 00:47:29 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:29.122175 | orchestrator | 2026-01-05 00:47:29 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:29.123738 | orchestrator | 2026-01-05 00:47:29 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:29.125433 | orchestrator | 2026-01-05 00:47:29 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:29.125478 | orchestrator | 2026-01-05 00:47:29 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:32.198182 | orchestrator | 2026-01-05 00:47:32 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:32.208158 | orchestrator | 2026-01-05 00:47:32 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:32.210993 | orchestrator | 2026-01-05 00:47:32 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:32.218426 | orchestrator | 2026-01-05 00:47:32 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:32.218469 | orchestrator | 2026-01-05 00:47:32 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:32.218478 | orchestrator | 2026-01-05 00:47:32 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:35.266973 | orchestrator | 2026-01-05 00:47:35 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:35.267085 | orchestrator | 2026-01-05 00:47:35 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:35.268953 | orchestrator | 2026-01-05 00:47:35 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state STARTED
2026-01-05 00:47:35.273001 | orchestrator | 2026-01-05 00:47:35 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:35.273844 | orchestrator | 2026-01-05 00:47:35 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state STARTED
2026-01-05 00:47:35.273892 | orchestrator | 2026-01-05 00:47:35 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:38.328822 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:47:38.330124 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED
2026-01-05 00:47:38.330660 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 4edab064-1954-4c87-b496-6c8cd9191149 is in state SUCCESS
2026-01-05 00:47:38.331177 | orchestrator |
2026-01-05 00:47:38.331213 | orchestrator |
2026-01-05 00:47:38.331223 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-05 00:47:38.331231 | orchestrator |
2026-01-05 00:47:38.331238 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-05 00:47:38.331244 | orchestrator | Monday 05 January 2026 00:46:11 +0000 (0:00:00.764) 0:00:00.764 ********
2026-01-05 00:47:38.331250 | orchestrator | ok: [testbed-manager] => {
2026-01-05 00:47:38.331304 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-01-05 00:47:38.331314 | orchestrator | }
2026-01-05 00:47:38.331320 | orchestrator |
2026-01-05 00:47:38.331327 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-05 00:47:38.331334 | orchestrator | Monday 05 January 2026 00:46:12 +0000 (0:00:00.406) 0:00:01.170 ********
2026-01-05 00:47:38.331340 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.331348 | orchestrator |
2026-01-05 00:47:38.331353 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-05 00:47:38.331359 | orchestrator | Monday 05 January 2026 00:46:14 +0000 (0:00:02.596) 0:00:03.767 ********
2026-01-05 00:47:38.331366 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-05 00:47:38.331373 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-05 00:47:38.331379 | orchestrator |
2026-01-05 00:47:38.331385 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-05 00:47:38.331391 | orchestrator | Monday 05 January 2026 00:46:16 +0000 (0:00:01.513) 0:00:05.281 ********
2026-01-05 00:47:38.331397 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.331403 | orchestrator |
2026-01-05 00:47:38.331409 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-05 00:47:38.331416 | orchestrator | Monday 05 January 2026 00:46:19 +0000 (0:00:03.450) 0:00:08.732 ********
2026-01-05 00:47:38.331423 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.331427 | orchestrator |
2026-01-05 00:47:38.331431 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-05 00:47:38.331435 | orchestrator | Monday 05 January 2026 00:46:21 +0000 (0:00:01.955) 0:00:10.687 ********
2026-01-05 00:47:38.331439 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-05 00:47:38.331444 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.331447 | orchestrator |
2026-01-05 00:47:38.331452 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-05 00:47:38.331456 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:27.268) 0:00:37.956 ********
2026-01-05 00:47:38.331460 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.331464 | orchestrator |
2026-01-05 00:47:38.331468 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:47:38.331472 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.331477 | orchestrator |
2026-01-05 00:47:38.331481 | orchestrator |
2026-01-05 00:47:38.331485 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:47:38.331510 | orchestrator | Monday 05 January 2026 00:46:52 +0000 (0:00:03.422) 0:00:41.379 ********
2026-01-05 00:47:38.331516 | orchestrator | ===============================================================================
2026-01-05 00:47:38.331521 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.27s
2026-01-05 00:47:38.331544 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.45s
2026-01-05 00:47:38.331550 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.42s
2026-01-05 00:47:38.331556 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.60s
2026-01-05 00:47:38.331561 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.96s
2026-01-05 00:47:38.331567 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.51s
2026-01-05 00:47:38.331572 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.41s
2026-01-05 00:47:38.331578 | orchestrator |
2026-01-05 00:47:38.331584 | orchestrator |
2026-01-05 00:47:38.331590 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-05 00:47:38.331595 | orchestrator |
2026-01-05 00:47:38.331601 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-05 00:47:38.331607 | orchestrator | Monday 05 January 2026 00:46:10 +0000 (0:00:00.882) 0:00:00.882 ********
2026-01-05 00:47:38.331614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-05 00:47:38.331622 | orchestrator |
2026-01-05 00:47:38.331628 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-05 00:47:38.331633 | orchestrator | Monday 05 January 2026 00:46:11 +0000 (0:00:00.757) 0:00:01.639 ********
2026-01-05 00:47:38.331639 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-05 00:47:38.331645 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-05 00:47:38.331651 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-05 00:47:38.331657 | orchestrator |
2026-01-05 00:47:38.331664 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-05 00:47:38.331670 | orchestrator | Monday 05 January 2026 00:46:13 +0000 (0:00:02.276) 0:00:03.916 ********
2026-01-05 00:47:38.331676 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.331682 | orchestrator |
2026-01-05 00:47:38.331686 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-05 00:47:38.331690 | orchestrator | Monday 05 January 2026 00:46:15 +0000 (0:00:02.198) 0:00:06.115 ********
2026-01-05 00:47:38.331705 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-05 00:47:38.331709 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.331713 | orchestrator |
2026-01-05 00:47:38.331717 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-05 00:47:38.331720 | orchestrator | Monday 05 January 2026 00:46:46 +0000 (0:00:31.074) 0:00:37.189 ********
2026-01-05 00:47:38.331726 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.331732 | orchestrator |
2026-01-05 00:47:38.331738 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-05 00:47:38.331744 | orchestrator | Monday 05 January 2026 00:46:50 +0000 (0:00:03.533) 0:00:40.722 ********
2026-01-05 00:47:38.331749 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.331757 | orchestrator |
2026-01-05 00:47:38.331765 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-05 00:47:38.331772 | orchestrator | Monday 05 January 2026 00:46:51 +0000 (0:00:01.078) 0:00:41.801 ********
2026-01-05 00:47:38.331777 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.331783 | orchestrator |
2026-01-05 00:47:38.331790 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-05 00:47:38.331815 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:01.953) 0:00:43.754 ********
2026-01-05 00:47:38.331821 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.331827 | orchestrator |
2026-01-05 00:47:38.331832 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-05 00:47:38.331839 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.866) 0:00:44.621 ********
2026-01-05 00:47:38.331845 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.331850 | orchestrator |
2026-01-05 00:47:38.331856 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-05 00:47:38.331863 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.541) 0:00:45.163 ********
2026-01-05 00:47:38.331869 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.331875 | orchestrator |
2026-01-05 00:47:38.331882 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:47:38.331888 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.331895 | orchestrator |
2026-01-05 00:47:38.331901 | orchestrator |
2026-01-05 00:47:38.331908 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:47:38.331913 | orchestrator | Monday 05 January 2026 00:46:55 +0000 (0:00:00.405) 0:00:45.569 ********
2026-01-05 00:47:38.331919 | orchestrator | ===============================================================================
2026-01-05 00:47:38.331925 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 31.07s
2026-01-05 00:47:38.331931 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.53s
2026-01-05 00:47:38.331938 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.28s
2026-01-05 00:47:38.332053 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.20s
2026-01-05 00:47:38.332058 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.95s
2026-01-05 00:47:38.332062 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.08s
2026-01-05 00:47:38.332066 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.87s
2026-01-05 00:47:38.332070 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.76s
2026-01-05 00:47:38.332080 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.54s
2026-01-05 00:47:38.332084 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.41s
2026-01-05 00:47:38.332088 | orchestrator |
2026-01-05 00:47:38.332092 | orchestrator |
2026-01-05 00:47:38.332096 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-01-05 00:47:38.332100 | orchestrator |
2026-01-05 00:47:38.332103 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-05 00:47:38.332107 | orchestrator | Monday 05 January 2026 00:46:30 +0000 (0:00:00.219) 0:00:00.219 ********
2026-01-05 00:47:38.332111 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.332115 | orchestrator |
2026-01-05 00:47:38.332118 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-05 00:47:38.332122 | orchestrator | Monday 05 January 2026 00:46:31 +0000 (0:00:00.900) 0:00:01.119 ********
2026-01-05 00:47:38.332126 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-05 00:47:38.332130 | orchestrator |
2026-01-05 00:47:38.332133 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-05 00:47:38.332137 | orchestrator | Monday 05 January 2026 00:46:32 +0000 (0:00:00.510) 0:00:01.629 ********
2026-01-05 00:47:38.332141 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.332145 | orchestrator |
2026-01-05 00:47:38.332148 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-05 00:47:38.332152 | orchestrator | Monday 05 January 2026 00:46:33 +0000 (0:00:01.716) 0:00:03.346 ********
2026-01-05 00:47:38.332156 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-05 00:47:38.332167 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.332171 | orchestrator |
2026-01-05 00:47:38.332175 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-05 00:47:38.332178 | orchestrator | Monday 05 January 2026 00:47:31 +0000 (0:00:58.069) 0:01:01.416 ********
2026-01-05 00:47:38.332182 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.332186 | orchestrator |
2026-01-05 00:47:38.332190 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:47:38.332193 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.332197 | orchestrator |
2026-01-05 00:47:38.332201 | orchestrator |
2026-01-05 00:47:38.332205 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:47:38.332216 | orchestrator | Monday 05 January 2026 00:47:36 +0000 (0:00:04.895) 0:01:06.311 ********
2026-01-05 00:47:38.332220 | orchestrator | ===============================================================================
2026-01-05 00:47:38.332223 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.07s
2026-01-05 00:47:38.332227 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.90s
2026-01-05 00:47:38.332231 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.72s
2026-01-05 00:47:38.332235 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.90s
2026-01-05 00:47:38.332238 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.51s
2026-01-05 00:47:38.332242 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED
2026-01-05 00:47:38.333811 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 166df0fb-5ee5-48ad-908c-fda610fe6801 is in state SUCCESS
2026-01-05 00:47:38.334425 | orchestrator | 2026-01-05 00:47:38 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:38.335012 | orchestrator |
2026-01-05 00:47:38.335032 | orchestrator |
2026-01-05 00:47:38.335039 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:47:38.335046 | orchestrator |
2026-01-05 00:47:38.335052 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:47:38.335058 | orchestrator | Monday 05 January 2026 00:46:09 +0000 (0:00:00.356) 0:00:00.356 ********
2026-01-05 00:47:38.335065 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-05 00:47:38.335069 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-05 00:47:38.335073 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-05 00:47:38.335077 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-05 00:47:38.335081 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-05 00:47:38.335085 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-05 00:47:38.335088 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-05 00:47:38.335092 | orchestrator |
2026-01-05 00:47:38.335096 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-05 00:47:38.335100 | orchestrator |
2026-01-05 00:47:38.335104 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-05 00:47:38.335110 | orchestrator | Monday 05 January 2026 00:46:11 +0000 (0:00:02.419) 0:00:02.776 ********
2026-01-05 00:47:38.335130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:47:38.335143 | orchestrator |
2026-01-05 00:47:38.335149 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-05 00:47:38.335156 | orchestrator | Monday 05 January 2026 00:46:13 +0000 (0:00:01.446) 0:00:04.223 ********
2026-01-05 00:47:38.335175 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:47:38.335183 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.335189 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:47:38.335195 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:47:38.335201 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:38.335213 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:38.335217 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:47:38.335221 | orchestrator |
2026-01-05 00:47:38.335225 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-05 00:47:38.335229 | orchestrator | Monday 05 January 2026 00:46:16 +0000 (0:00:03.125) 0:00:07.349 ********
2026-01-05 00:47:38.335232 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:47:38.335236 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.335240 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:47:38.335243 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:47:38.335247 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:47:38.335251 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:38.335254 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:38.335258 | orchestrator |
2026-01-05 00:47:38.335287 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-05 00:47:38.335291 | orchestrator | Monday 05 January 2026 00:46:20 +0000 (0:00:03.728) 0:00:11.077 ********
2026-01-05 00:47:38.335295 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:47:38.335299 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:47:38.335302 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:47:38.335306 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.335310 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:47:38.335315 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:47:38.335318 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:47:38.335322 | orchestrator |
2026-01-05 00:47:38.335326 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-05 00:47:38.335330 | orchestrator | Monday 05 January 2026 00:46:24 +0000 (0:00:04.104) 0:00:15.182 ********
2026-01-05 00:47:38.335333 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:47:38.335337 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:47:38.335341 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:47:38.335344 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:47:38.335348 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:47:38.335352 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:47:38.335356 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.335359 | orchestrator |
2026-01-05 00:47:38.335363 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-05 00:47:38.335367 | orchestrator | Monday 05 January 2026 00:46:36 +0000 (0:00:11.692) 0:00:26.875 ********
2026-01-05 00:47:38.335371 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:47:38.335374 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:47:38.335378 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:47:38.335382 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:47:38.335385 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:47:38.335389 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:47:38.335393 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.335396 | orchestrator |
2026-01-05 00:47:38.335400 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-05 00:47:38.335404 | orchestrator | Monday 05 January 2026 00:47:14 +0000 (0:00:37.972) 0:01:04.847 ********
2026-01-05 00:47:38.335408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:47:38.335415 | orchestrator |
2026-01-05 00:47:38.335418 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-05 00:47:38.335422 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:01.444) 0:01:06.292 ********
2026-01-05 00:47:38.335426 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-05 00:47:38.335435 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-05 00:47:38.335439 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-05 00:47:38.335443 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-05 00:47:38.335453 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-05 00:47:38.335457 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-05 00:47:38.335460 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-05 00:47:38.335464 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-05 00:47:38.335468 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-05 00:47:38.335472 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-05 00:47:38.335475 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-05 00:47:38.335479 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-05 00:47:38.335483 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-05 00:47:38.335486 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-05 00:47:38.335490 | orchestrator |
2026-01-05 00:47:38.335494 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-05 00:47:38.335499 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:01.134) 0:01:10.640 ********
2026-01-05 00:47:38.335503 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.335507 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:47:38.335510 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:47:38.335514 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:47:38.335518 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:38.335522 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:38.335525 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:47:38.335529 | orchestrator |
2026-01-05 00:47:38.335533 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-05 00:47:38.335537 | orchestrator | Monday 05 January 2026 00:47:20 +0000 (0:00:01.134) 0:01:11.774 ********
2026-01-05 00:47:38.335540 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:47:38.335544 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:47:38.335548 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.335552 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:47:38.335555 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:47:38.335559 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:47:38.335563 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:47:38.335566 | orchestrator |
2026-01-05 00:47:38.335570 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-05 00:47:38.335577 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:01.453) 0:01:13.228 ********
2026-01-05 00:47:38.335581 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:47:38.335585 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:38.335588 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:47:38.335592 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:47:38.335596 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.335599 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:38.335603 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:47:38.335607 | orchestrator |
2026-01-05 00:47:38.335611 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-05 00:47:38.335614 | orchestrator | Monday 05 January 2026 00:47:24 +0000 (0:00:01.833) 0:01:15.061 ********
2026-01-05 00:47:38.335618 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:38.335622 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:47:38.335626 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:47:38.335629 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:47:38.335633 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:38.335637 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:38.335642 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:47:38.335646 | orchestrator |
2026-01-05 00:47:38.335650 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-05 00:47:38.335655 | orchestrator | Monday 05 January 2026 00:47:27 +0000 (0:00:03.488) 0:01:18.549 ********
2026-01-05 00:47:38.335662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-05 00:47:38.335670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:47:38.335674 | orchestrator |
2026-01-05 00:47:38.335678 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-05 00:47:38.335683 | orchestrator | Monday 05 January 2026 00:47:29 +0000 (0:00:01.706) 0:01:20.256 ********
2026-01-05 00:47:38.335687 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.335692 | orchestrator |
2026-01-05 00:47:38.335696 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-05 00:47:38.335701 | orchestrator | Monday 05 January 2026 00:47:31 +0000 (0:00:02.336) 0:01:22.592 ********
2026-01-05 00:47:38.335705 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:47:38.335709 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:47:38.335714 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:47:38.335718 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:38.335722 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:47:38.335726 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:47:38.335730 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:47:38.335733 | orchestrator |
2026-01-05 00:47:38.335737 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:47:38.335741 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.335746 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.335749 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.335753 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.335759 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.335763 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.335767 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:47:38.335771 | orchestrator |
2026-01-05 00:47:38.335774 | orchestrator |
2026-01-05 00:47:38.335778 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:47:38.335782 | orchestrator | Monday 05 January 2026 00:47:35 +0000 (0:00:03.454) 0:01:26.047 ********
2026-01-05 00:47:38.335786 | orchestrator | ===============================================================================
2026-01-05 00:47:38.335790 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 37.97s
2026-01-05 00:47:38.335793 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.69s
2026-01-05 00:47:38.335797 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.35s
2026-01-05 00:47:38.335801 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.10s
2026-01-05 00:47:38.335805 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.73s
2026-01-05 00:47:38.335809 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.49s
2026-01-05 00:47:38.335812 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.45s
2026-01-05 00:47:38.335816 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.12s
2026-01-05 00:47:38.335823 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.42s
2026-01-05 00:47:38.335828 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.34s
2026-01-05 00:47:38.335833 | orchestrator | osism.services.netdata : Add netdata user to docker group ---------------
1.83s 2026-01-05 00:47:38.335842 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.71s 2026-01-05 00:47:38.335848 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.45s 2026-01-05 00:47:38.335854 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.45s 2026-01-05 00:47:38.335860 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.44s 2026-01-05 00:47:38.335866 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.13s 2026-01-05 00:47:41.373427 | orchestrator | 2026-01-05 00:47:41 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:47:41.374332 | orchestrator | 2026-01-05 00:47:41 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:47:41.376305 | orchestrator | 2026-01-05 00:47:41 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:47:41.376375 | orchestrator | 2026-01-05 00:47:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:44.418082 | orchestrator | 2026-01-05 00:47:44 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:47:44.418200 | orchestrator | 2026-01-05 00:47:44 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:47:44.420248 | orchestrator | 2026-01-05 00:47:44 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:47:44.420411 | orchestrator | 2026-01-05 00:47:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:47.487691 | orchestrator | 2026-01-05 00:47:47 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:47:47.490339 | orchestrator | 2026-01-05 00:47:47 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:47:47.493915 | orchestrator 
| 2026-01-05 00:47:47 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:47:47.494420 | orchestrator | 2026-01-05 00:47:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:50.572603 | orchestrator | 2026-01-05 00:47:50 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:47:50.576459 | orchestrator | 2026-01-05 00:47:50 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:47:50.582230 | orchestrator | 2026-01-05 00:47:50 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:47:50.582314 | orchestrator | 2026-01-05 00:47:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:53.689077 | orchestrator | 2026-01-05 00:47:53 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:47:53.692243 | orchestrator | 2026-01-05 00:47:53 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:47:53.693987 | orchestrator | 2026-01-05 00:47:53 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:47:53.694041 | orchestrator | 2026-01-05 00:47:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:56.730789 | orchestrator | 2026-01-05 00:47:56 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:47:56.735060 | orchestrator | 2026-01-05 00:47:56 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:47:56.737005 | orchestrator | 2026-01-05 00:47:56 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:47:56.737045 | orchestrator | 2026-01-05 00:47:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:59.782753 | orchestrator | 2026-01-05 00:47:59 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:47:59.787092 | orchestrator | 2026-01-05 00:47:59 | INFO  | Task 
503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:47:59.787438 | orchestrator | 2026-01-05 00:47:59 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:47:59.789230 | orchestrator | 2026-01-05 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:02.845883 | orchestrator | 2026-01-05 00:48:02 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:02.848299 | orchestrator | 2026-01-05 00:48:02 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:02.850336 | orchestrator | 2026-01-05 00:48:02 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:02.850402 | orchestrator | 2026-01-05 00:48:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:05.899139 | orchestrator | 2026-01-05 00:48:05 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:05.899198 | orchestrator | 2026-01-05 00:48:05 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:05.899212 | orchestrator | 2026-01-05 00:48:05 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:05.899217 | orchestrator | 2026-01-05 00:48:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:08.974481 | orchestrator | 2026-01-05 00:48:08 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:08.976827 | orchestrator | 2026-01-05 00:48:08 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:08.978542 | orchestrator | 2026-01-05 00:48:08 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:08.980068 | orchestrator | 2026-01-05 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:12.031640 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state
STARTED 2026-01-05 00:48:12.032782 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:12.033767 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:12.033817 | orchestrator | 2026-01-05 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:15.066579 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:15.069515 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:15.072306 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:15.072464 | orchestrator | 2026-01-05 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:18.115037 | orchestrator | 2026-01-05 00:48:18 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:18.116715 | orchestrator | 2026-01-05 00:48:18 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:18.118339 | orchestrator | 2026-01-05 00:48:18 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:18.118500 | orchestrator | 2026-01-05 00:48:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:21.177022 | orchestrator | 2026-01-05 00:48:21 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:21.179669 | orchestrator | 2026-01-05 00:48:21 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:21.181156 | orchestrator | 2026-01-05 00:48:21 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:21.181206 | orchestrator | 2026-01-05 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:24.221660 | orchestrator | 
2026-01-05 00:48:24 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:24.224049 | orchestrator | 2026-01-05 00:48:24 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:24.226413 | orchestrator | 2026-01-05 00:48:24 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:24.226489 | orchestrator | 2026-01-05 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:27.276518 | orchestrator | 2026-01-05 00:48:27 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:27.277265 | orchestrator | 2026-01-05 00:48:27 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:27.278688 | orchestrator | 2026-01-05 00:48:27 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:27.278733 | orchestrator | 2026-01-05 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:30.337351 | orchestrator | 2026-01-05 00:48:30 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:30.340302 | orchestrator | 2026-01-05 00:48:30 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:30.343373 | orchestrator | 2026-01-05 00:48:30 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:30.343458 | orchestrator | 2026-01-05 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:33.381917 | orchestrator | 2026-01-05 00:48:33 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:33.381992 | orchestrator | 2026-01-05 00:48:33 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:33.383152 | orchestrator | 2026-01-05 00:48:33 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:33.383260 | orchestrator | 2026-01-05 00:48:33 | INFO  | 
Wait 1 second(s) until the next check 2026-01-05 00:48:36.427549 | orchestrator | 2026-01-05 00:48:36 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:36.429996 | orchestrator | 2026-01-05 00:48:36 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state STARTED 2026-01-05 00:48:36.432579 | orchestrator | 2026-01-05 00:48:36 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:36.432623 | orchestrator | 2026-01-05 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:39.480518 | orchestrator | 2026-01-05 00:48:39 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:39.484064 | orchestrator | 2026-01-05 00:48:39 | INFO  | Task 503d9764-7c8b-4cd9-b490-3be1d9bb93d1 is in state SUCCESS 2026-01-05 00:48:39.485853 | orchestrator | 2026-01-05 00:48:39.485898 | orchestrator | 2026-01-05 00:48:39.485909 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-01-05 00:48:39.485941 | orchestrator | 2026-01-05 00:48:39.485951 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-05 00:48:39.485961 | orchestrator | Monday 05 January 2026 00:46:02 +0000 (0:00:00.251) 0:00:00.251 ******** 2026-01-05 00:48:39.485972 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:48:39.485984 | orchestrator | 2026-01-05 00:48:39.485994 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-05 00:48:39.486004 | orchestrator | Monday 05 January 2026 00:46:03 +0000 (0:00:01.400) 0:00:01.652 ******** 2026-01-05 00:48:39.486112 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-05 00:48:39.486129 | orchestrator | changed: 
[testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-05 00:48:39.486139 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-05 00:48:39.486149 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-05 00:48:39.486159 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-05 00:48:39.486168 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-05 00:48:39.486179 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-05 00:48:39.486189 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-05 00:48:39.486199 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-05 00:48:39.486229 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-05 00:48:39.486239 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-05 00:48:39.486248 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-05 00:48:39.486258 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-05 00:48:39.486267 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-05 00:48:39.486277 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-05 00:48:39.486287 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-05 00:48:39.486296 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-05 00:48:39.486306 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-05 00:48:39.486315 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-05 00:48:39.486325 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-05 00:48:39.486335 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-05 00:48:39.486344 | orchestrator | 2026-01-05 00:48:39.486354 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-05 00:48:39.486364 | orchestrator | Monday 05 January 2026 00:46:08 +0000 (0:00:04.685) 0:00:06.337 ******** 2026-01-05 00:48:39.486374 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:48:39.486385 | orchestrator | 2026-01-05 00:48:39.486395 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-05 00:48:39.486406 | orchestrator | Monday 05 January 2026 00:46:09 +0000 (0:00:01.458) 0:00:07.796 ******** 2026-01-05 00:48:39.486431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.486563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.486598 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.486612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.486625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.486665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486684 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.486704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.486717 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486823 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.486844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.486854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.486864 | orchestrator |
2026-01-05 00:48:39.486873 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-01-05 00:48:39.486883 | orchestrator | Monday 05 January 2026 00:46:15 +0000 (0:00:06.143) 0:00:13.940 ********
2026-01-05 00:48:39.486894 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.486904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.486924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.486935 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.486950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.486985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.486997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487049 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487059 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:39.487070 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:39.487079 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:48:39.487090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487164 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:39.487174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487227 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:39.487238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487248 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:39.487258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487302 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:39.487312 | orchestrator |
2026-01-05 00:48:39.487322 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-05 00:48:39.487332 | orchestrator | Monday 05 January 2026 00:46:19 +0000 (0:00:03.875) 0:00:17.815 ********
2026-01-05 00:48:39.487342 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487352 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487404 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:48:39.487415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487442 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:39.487452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.487488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.487513 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:39.487523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.488047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.488081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.488099 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:39.488116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.488152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.488170 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:39.488188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.488240 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:39.488253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.488271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.488281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.488291 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:39.488301 | orchestrator |
2026-01-05 00:48:39.488311 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-01-05 00:48:39.488321 | orchestrator | Monday 05 January 2026 00:46:26 +0000 (0:00:06.376) 0:00:24.192 ********
2026-01-05 00:48:39.488332 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:48:39.488348 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:39.488364 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:39.488379 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:39.488394 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:39.488422 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:39.488439 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:39.488456 | orchestrator |
2026-01-05 00:48:39.488473 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-01-05 00:48:39.488489 | orchestrator | Monday 05 January 2026 00:46:27 +0000 (0:00:01.111) 0:00:25.303 ********
2026-01-05 00:48:39.488506 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:48:39.488523 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:39.488553 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:39.488570 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:39.488586 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:39.488603 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:39.488619 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:39.488637 | orchestrator |
2026-01-05 00:48:39.488655 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-01-05 00:48:39.488673 | orchestrator | Monday 05 January 2026 00:46:28 +0000 (0:00:01.007) 0:00:26.636 ********
2026-01-05 00:48:39.488691 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:48:39.488708 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:39.488725 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:39.488742 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:39.488754 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:39.488767 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:39.488784 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:39.488800 | orchestrator |
2026-01-05 00:48:39.488817 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-01-05 00:48:39.488836 | orchestrator | Monday 05 January 2026 00:46:29 +0000 (0:00:01.007) 0:00:27.644 ********
2026-01-05 00:48:39.488852 | orchestrator | changed: [testbed-manager]
2026-01-05 00:48:39.488871 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:48:39.488887 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:48:39.488904 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:48:39.488916 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:48:39.488933 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:48:39.488950 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:48:39.488966 | orchestrator |
2026-01-05 00:48:39.488985 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-05 00:48:39.489001 | orchestrator | Monday 05 January 2026 00:46:32 +0000 (0:00:02.759) 0:00:30.403 ********
2026-01-05 00:48:39.489018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.489031 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.489047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.489063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.489094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.489122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.489134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:48:39.489155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489251 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489277 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489320 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489360 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:48:39.489392 | orchestrator |
2026-01-05 00:48:39.489402 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-05 00:48:39.489412 | orchestrator | Monday 05 January 2026 00:46:37 +0000 (0:00:04.969) 0:00:35.372 ********
2026-01-05 00:48:39.489422 | orchestrator | [WARNING]: Skipped
2026-01-05 00:48:39.489434 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-05 00:48:39.489444 | orchestrator | to this access issue:
2026-01-05 00:48:39.489454 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-05 00:48:39.489464 | orchestrator | directory
2026-01-05 00:48:39.489474 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 00:48:39.489483 | orchestrator |
2026-01-05 00:48:39.489494 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-05 00:48:39.489510 | orchestrator | Monday 05 January 2026 00:46:38 +0000 (0:00:00.972) 0:00:36.344 ********
2026-01-05 00:48:39.489527 | orchestrator | [WARNING]: Skipped
2026-01-05 00:48:39.489544 | orchestrator |
'/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-05 00:48:39.489561 | orchestrator | to this access issue: 2026-01-05 00:48:39.489577 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-05 00:48:39.489591 | orchestrator | directory 2026-01-05 00:48:39.489601 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:48:39.489611 | orchestrator | 2026-01-05 00:48:39.489621 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-05 00:48:39.489630 | orchestrator | Monday 05 January 2026 00:46:39 +0000 (0:00:01.029) 0:00:37.374 ******** 2026-01-05 00:48:39.489640 | orchestrator | [WARNING]: Skipped 2026-01-05 00:48:39.489649 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-05 00:48:39.489659 | orchestrator | to this access issue: 2026-01-05 00:48:39.489668 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-05 00:48:39.489678 | orchestrator | directory 2026-01-05 00:48:39.489687 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:48:39.489697 | orchestrator | 2026-01-05 00:48:39.489707 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-05 00:48:39.489716 | orchestrator | Monday 05 January 2026 00:46:40 +0000 (0:00:00.790) 0:00:38.164 ******** 2026-01-05 00:48:39.489730 | orchestrator | [WARNING]: Skipped 2026-01-05 00:48:39.489746 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-05 00:48:39.489762 | orchestrator | to this access issue: 2026-01-05 00:48:39.489779 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-05 00:48:39.489795 | orchestrator | directory 2026-01-05 00:48:39.489810 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 
00:48:39.489826 | orchestrator | 2026-01-05 00:48:39.489836 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-05 00:48:39.489846 | orchestrator | Monday 05 January 2026 00:46:42 +0000 (0:00:02.056) 0:00:40.221 ******** 2026-01-05 00:48:39.489855 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:39.489865 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:39.489874 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:39.489884 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:39.489898 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:39.489914 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:39.489930 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:39.489946 | orchestrator | 2026-01-05 00:48:39.489963 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-05 00:48:39.489979 | orchestrator | Monday 05 January 2026 00:46:47 +0000 (0:00:05.037) 0:00:45.259 ******** 2026-01-05 00:48:39.489993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:48:39.490003 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:48:39.490060 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:48:39.490073 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:48:39.490088 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:48:39.490099 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:48:39.490108 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:48:39.490118 | orchestrator | 2026-01-05 00:48:39.490127 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-05 00:48:39.490142 | orchestrator | Monday 05 January 2026 00:46:51 +0000 (0:00:04.030) 0:00:49.290 ******** 2026-01-05 00:48:39.490158 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:39.490176 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:39.490193 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:39.490230 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:39.490248 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:39.490263 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:39.490279 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:39.490291 | orchestrator | 2026-01-05 00:48:39.490300 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-05 00:48:39.490310 | orchestrator | Monday 05 January 2026 00:46:55 +0000 (0:00:03.878) 0:00:53.168 ******** 2026-01-05 00:48:39.490333 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.490344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.490362 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.490373 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.490383 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 
00:48:39.490394 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.490404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.490424 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.490451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.490469 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.490495 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.490512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-05 00:48:39.490522 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.490533 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.490549 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.490574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.490592 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.490625 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.490641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.490651 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.490661 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.490671 | orchestrator | 2026-01-05 00:48:39.490681 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-05 00:48:39.490692 | orchestrator | Monday 05 January 2026 00:46:57 +0000 (0:00:02.392) 0:00:55.561 ******** 2026-01-05 00:48:39.490708 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:48:39.490725 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:48:39.490741 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:48:39.490758 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:48:39.490780 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:48:39.490793 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:48:39.490803 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:48:39.490813 | orchestrator | 2026-01-05 00:48:39.490822 | orchestrator | TASK [common : Copy 
rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-05 00:48:39.490836 | orchestrator | Monday 05 January 2026 00:47:00 +0000 (0:00:03.057) 0:00:58.618 ******** 2026-01-05 00:48:39.491017 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:48:39.491038 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:48:39.491048 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:48:39.491057 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:48:39.491067 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:48:39.491077 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:48:39.491096 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:48:39.491106 | orchestrator | 2026-01-05 00:48:39.491124 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-01-05 00:48:39.491134 | orchestrator | Monday 05 January 2026 00:47:03 +0000 (0:00:03.094) 0:01:01.712 ******** 2026-01-05 00:48:39.491145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.491156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.491166 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.491176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.491186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.491332 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.491449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:48:39.491460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491488 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491563 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:48:39.491583 | orchestrator | 2026-01-05 00:48:39.491593 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-01-05 00:48:39.491603 | orchestrator | Monday 05 January 2026 00:47:08 +0000 (0:00:04.656) 0:01:06.369 ******** 2026-01-05 00:48:39.491612 | orchestrator | changed: [testbed-manager] => { 2026-01-05 00:48:39.491627 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:48:39.491644 | orchestrator | } 2026-01-05 00:48:39.491656 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:48:39.491667 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:48:39.491680 | orchestrator | } 2026-01-05 00:48:39.491691 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:48:39.491702 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:48:39.491713 | orchestrator | } 2026-01-05 00:48:39.491724 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:48:39.491735 | orchestrator |  "msg": 
"Notifying handlers" 2026-01-05 00:48:39.491746 | orchestrator | } 2026-01-05 00:48:39.491758 | orchestrator | changed: [testbed-node-3] => { 2026-01-05 00:48:39.491773 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:48:39.491790 | orchestrator | } 2026-01-05 00:48:39.491809 | orchestrator | changed: [testbed-node-5] => { 2026-01-05 00:48:39.491826 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:48:39.491838 | orchestrator | } 2026-01-05 00:48:39.491851 | orchestrator | changed: [testbed-node-4] => { 2026-01-05 00:48:39.491864 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:48:39.491876 | orchestrator | } 2026-01-05 00:48:39.491889 | orchestrator | 2026-01-05 00:48:39.491903 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:48:39.491916 | orchestrator | Monday 05 January 2026 00:47:09 +0000 (0:00:01.354) 0:01:07.724 ******** 2026-01-05 00:48:39.491941 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:48:39.491957 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.491970 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.491984 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:48:39.491995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:48:39.492003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492028 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:48:39.492041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:48:39.492050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:48:39.492087 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:48:39.492096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492117 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:48:39.492125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:48:39.492137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492154 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:48:39.492166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:48:39.492175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492191 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:48:39.492199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:48:39.492300 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:48:39.492324 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:48:39.492332 | orchestrator | 2026-01-05 00:48:39.492342 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-05 00:48:39.492356 | orchestrator | Monday 05 January 2026 00:47:12 +0000 (0:00:02.604) 0:01:10.328 ******** 2026-01-05 00:48:39.492370 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:39.492383 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:39.492397 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:39.492409 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:39.492430 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:39.492443 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:39.492451 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:39.492458 | orchestrator | 2026-01-05 00:48:39.492466 | orchestrator | TASK [common : Link kolla_logs 
volume to /var/log/kolla] *********************** 2026-01-05 00:48:39.492474 | orchestrator | Monday 05 January 2026 00:47:14 +0000 (0:00:02.125) 0:01:12.454 ******** 2026-01-05 00:48:39.492482 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:39.492490 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:39.492498 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:39.492506 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:39.492513 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:39.492521 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:39.492529 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:39.492537 | orchestrator | 2026-01-05 00:48:39.492545 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:48:39.492553 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:01.537) 0:01:13.991 ******** 2026-01-05 00:48:39.492560 | orchestrator | 2026-01-05 00:48:39.492568 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:48:39.492576 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:00.070) 0:01:14.062 ******** 2026-01-05 00:48:39.492584 | orchestrator | 2026-01-05 00:48:39.492592 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:48:39.492600 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:00.068) 0:01:14.131 ******** 2026-01-05 00:48:39.492607 | orchestrator | 2026-01-05 00:48:39.492621 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:48:39.492630 | orchestrator | Monday 05 January 2026 00:47:16 +0000 (0:00:00.310) 0:01:14.441 ******** 2026-01-05 00:48:39.492637 | orchestrator | 2026-01-05 00:48:39.492645 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:48:39.492653 | 
orchestrator | Monday 05 January 2026 00:47:16 +0000 (0:00:00.073) 0:01:14.515 ******** 2026-01-05 00:48:39.492661 | orchestrator | 2026-01-05 00:48:39.492669 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:48:39.492677 | orchestrator | Monday 05 January 2026 00:47:16 +0000 (0:00:00.088) 0:01:14.603 ******** 2026-01-05 00:48:39.492685 | orchestrator | 2026-01-05 00:48:39.492699 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:48:39.492725 | orchestrator | Monday 05 January 2026 00:47:16 +0000 (0:00:00.177) 0:01:14.781 ******** 2026-01-05 00:48:39.492740 | orchestrator | 2026-01-05 00:48:39.492753 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-05 00:48:39.492767 | orchestrator | Monday 05 January 2026 00:47:16 +0000 (0:00:00.126) 0:01:14.907 ******** 2026-01-05 00:48:39.492779 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:39.492793 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:39.492806 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:39.492819 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:39.492832 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:39.492844 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:39.492856 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:39.492869 | orchestrator | 2026-01-05 00:48:39.492883 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-05 00:48:39.492896 | orchestrator | Monday 05 January 2026 00:47:50 +0000 (0:00:33.295) 0:01:48.203 ******** 2026-01-05 00:48:39.492909 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:39.492922 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:39.492934 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:39.492947 | orchestrator | changed: 
[testbed-node-5] 2026-01-05 00:48:39.492960 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:39.492973 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:39.492986 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:39.492999 | orchestrator | 2026-01-05 00:48:39.493012 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-05 00:48:39.493025 | orchestrator | Monday 05 January 2026 00:48:25 +0000 (0:00:35.711) 0:02:23.914 ******** 2026-01-05 00:48:39.493038 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:48:39.493051 | orchestrator | ok: [testbed-manager] 2026-01-05 00:48:39.493065 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:48:39.493078 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:48:39.493091 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:48:39.493104 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:48:39.493117 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:48:39.493130 | orchestrator | 2026-01-05 00:48:39.493144 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-05 00:48:39.493157 | orchestrator | Monday 05 January 2026 00:48:28 +0000 (0:00:02.465) 0:02:26.380 ******** 2026-01-05 00:48:39.493170 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:39.493183 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:39.493195 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:39.493237 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:39.493251 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:39.493264 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:39.493278 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:39.493291 | orchestrator | 2026-01-05 00:48:39.493304 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:48:39.493319 | orchestrator | testbed-manager : ok=24  changed=16  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:48:39.493334 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:48:39.493347 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:48:39.493361 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:48:39.493374 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:48:39.493397 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:48:39.493411 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:48:39.493425 | orchestrator | 2026-01-05 00:48:39.493438 | orchestrator | 2026-01-05 00:48:39.493451 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:48:39.493465 | orchestrator | Monday 05 January 2026 00:48:38 +0000 (0:00:09.914) 0:02:36.294 ******** 2026-01-05 00:48:39.493478 | orchestrator | =============================================================================== 2026-01-05 00:48:39.493491 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.71s 2026-01-05 00:48:39.493505 | orchestrator | common : Restart fluentd container ------------------------------------- 33.30s 2026-01-05 00:48:39.493518 | orchestrator | common : Restart cron container ----------------------------------------- 9.91s 2026-01-05 00:48:39.493532 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 6.38s 2026-01-05 00:48:39.493552 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.14s 2026-01-05 00:48:39.493565 | orchestrator | common : Copying over 
fluentd.conf -------------------------------------- 5.04s 2026-01-05 00:48:39.493586 | orchestrator | common : Copying over config.json files for services -------------------- 4.97s 2026-01-05 00:48:39.493600 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.69s 2026-01-05 00:48:39.493613 | orchestrator | service-check-containers : common | Check containers -------------------- 4.66s 2026-01-05 00:48:39.493627 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.03s 2026-01-05 00:48:39.493640 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.88s 2026-01-05 00:48:39.493653 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.88s 2026-01-05 00:48:39.493667 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.09s 2026-01-05 00:48:39.493680 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.06s 2026-01-05 00:48:39.493693 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.76s 2026-01-05 00:48:39.493706 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.60s 2026-01-05 00:48:39.493720 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.47s 2026-01-05 00:48:39.493733 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.39s 2026-01-05 00:48:39.493746 | orchestrator | common : Creating log volume -------------------------------------------- 2.13s 2026-01-05 00:48:39.493760 | orchestrator | common : Find custom fluentd output config files ------------------------ 2.06s 2026-01-05 00:48:39.493774 | orchestrator | 2026-01-05 00:48:39 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:39.493789 | orchestrator | 2026-01-05 00:48:39 
| INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:42.540910 | orchestrator | 2026-01-05 00:48:42 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:48:42.541672 | orchestrator | 2026-01-05 00:48:42 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:42.542976 | orchestrator | 2026-01-05 00:48:42 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:48:42.543862 | orchestrator | 2026-01-05 00:48:42 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:48:42.545030 | orchestrator | 2026-01-05 00:48:42 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state STARTED 2026-01-05 00:48:42.545854 | orchestrator | 2026-01-05 00:48:42 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:42.545994 | orchestrator | 2026-01-05 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:45.574403 | orchestrator | 2026-01-05 00:48:45 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:48:45.574524 | orchestrator | 2026-01-05 00:48:45 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:45.575021 | orchestrator | 2026-01-05 00:48:45 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:48:45.575876 | orchestrator | 2026-01-05 00:48:45 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:48:45.579182 | orchestrator | 2026-01-05 00:48:45 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state STARTED 2026-01-05 00:48:45.579315 | orchestrator | 2026-01-05 00:48:45 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:45.579353 | orchestrator | 2026-01-05 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:48.605876 | orchestrator | 2026-01-05 00:48:48 | INFO  | Task 
d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:48:48.605988 | orchestrator | 2026-01-05 00:48:48 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:48.606002 | orchestrator | 2026-01-05 00:48:48 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:48:48.606012 | orchestrator | 2026-01-05 00:48:48 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:48:48.606082 | orchestrator | 2026-01-05 00:48:48 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state STARTED 2026-01-05 00:48:48.608288 | orchestrator | 2026-01-05 00:48:48 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:48.608411 | orchestrator | 2026-01-05 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:51.762917 | orchestrator | 2026-01-05 00:48:51 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:48:51.763013 | orchestrator | 2026-01-05 00:48:51 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:51.763025 | orchestrator | 2026-01-05 00:48:51 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:48:51.763034 | orchestrator | 2026-01-05 00:48:51 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:48:51.763043 | orchestrator | 2026-01-05 00:48:51 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state STARTED 2026-01-05 00:48:51.763051 | orchestrator | 2026-01-05 00:48:51 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:51.763060 | orchestrator | 2026-01-05 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:54.696123 | orchestrator | 2026-01-05 00:48:54 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:48:54.696276 | orchestrator | 2026-01-05 00:48:54 | INFO  | Task 
a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:54.696545 | orchestrator | 2026-01-05 00:48:54 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:48:54.697105 | orchestrator | 2026-01-05 00:48:54 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:48:54.698621 | orchestrator | 2026-01-05 00:48:54 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state STARTED 2026-01-05 00:48:54.699124 | orchestrator | 2026-01-05 00:48:54 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:54.699209 | orchestrator | 2026-01-05 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:57.739861 | orchestrator | 2026-01-05 00:48:57 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:48:57.742275 | orchestrator | 2026-01-05 00:48:57 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:48:57.742334 | orchestrator | 2026-01-05 00:48:57 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:48:57.742347 | orchestrator | 2026-01-05 00:48:57 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:48:57.742376 | orchestrator | 2026-01-05 00:48:57 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state STARTED 2026-01-05 00:48:57.742388 | orchestrator | 2026-01-05 00:48:57 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:48:57.742447 | orchestrator | 2026-01-05 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:00.772643 | orchestrator | 2026-01-05 00:49:00 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:49:00.775938 | orchestrator | 2026-01-05 00:49:00 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:49:00.780985 | orchestrator | 2026-01-05 00:49:00 | INFO  | Task 
88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:49:00.784046 | orchestrator | 2026-01-05 00:49:00 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:49:00.790361 | orchestrator | 2026-01-05 00:49:00 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state STARTED 2026-01-05 00:49:00.790979 | orchestrator | 2026-01-05 00:49:00 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:49:00.791445 | orchestrator | 2026-01-05 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:03.863825 | orchestrator | 2026-01-05 00:49:03 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:49:03.863950 | orchestrator | 2026-01-05 00:49:03 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:49:03.863965 | orchestrator | 2026-01-05 00:49:03 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:49:03.863977 | orchestrator | 2026-01-05 00:49:03 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:49:03.863987 | orchestrator | 2026-01-05 00:49:03 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state STARTED 2026-01-05 00:49:03.863997 | orchestrator | 2026-01-05 00:49:03 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:49:03.864008 | orchestrator | 2026-01-05 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:06.907842 | orchestrator | 2026-01-05 00:49:06 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:49:06.908850 | orchestrator | 2026-01-05 00:49:06 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:49:06.910477 | orchestrator | 2026-01-05 00:49:06 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:49:06.912260 | orchestrator | 2026-01-05 00:49:06 | INFO  | Task 
56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state STARTED 2026-01-05 00:49:06.914562 | orchestrator | 2026-01-05 00:49:06 | INFO  | Task 3a0d308d-e94f-49fd-b3b4-18947ae8204d is in state SUCCESS 2026-01-05 00:49:06.914833 | orchestrator | 2026-01-05 00:49:06.914864 | orchestrator | 2026-01-05 00:49:06.914883 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:49:06.914941 | orchestrator | 2026-01-05 00:49:06.914963 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:49:06.914982 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:00.746) 0:00:00.746 ******** 2026-01-05 00:49:06.915001 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:49:06.915022 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:49:06.915039 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:49:06.915058 | orchestrator | 2026-01-05 00:49:06.915070 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:49:06.915081 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:00.437) 0:00:01.184 ******** 2026-01-05 00:49:06.915093 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-05 00:49:06.915104 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-05 00:49:06.915115 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-05 00:49:06.915126 | orchestrator | 2026-01-05 00:49:06.915137 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-05 00:49:06.915148 | orchestrator | 2026-01-05 00:49:06.915158 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-05 00:49:06.915169 | orchestrator | Monday 05 January 2026 00:48:48 +0000 (0:00:00.713) 0:00:01.898 ******** 2026-01-05 00:49:06.915214 | orchestrator | included: 
/ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:49:06.915226 | orchestrator | 2026-01-05 00:49:06.915237 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-05 00:49:06.915248 | orchestrator | Monday 05 January 2026 00:48:49 +0000 (0:00:01.109) 0:00:03.007 ******** 2026-01-05 00:49:06.915259 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-05 00:49:06.915271 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-05 00:49:06.915282 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-05 00:49:06.915293 | orchestrator | 2026-01-05 00:49:06.915304 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-05 00:49:06.915315 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:00.965) 0:00:03.973 ******** 2026-01-05 00:49:06.915326 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-05 00:49:06.915337 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-05 00:49:06.915348 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-05 00:49:06.915359 | orchestrator | 2026-01-05 00:49:06.915370 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-01-05 00:49:06.915381 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:02.982) 0:00:06.955 ******** 2026-01-05 00:49:06.915398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:49:06.915431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:49:06.915472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:49:06.915486 | orchestrator | 2026-01-05 00:49:06.915500 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart 
containers] *** 2026-01-05 00:49:06.915513 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:01.526) 0:00:08.481 ******** 2026-01-05 00:49:06.915526 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:49:06.915539 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:49:06.915552 | orchestrator | } 2026-01-05 00:49:06.915566 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:49:06.915579 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:49:06.915592 | orchestrator | } 2026-01-05 00:49:06.915606 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:49:06.915619 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:49:06.915631 | orchestrator | } 2026-01-05 00:49:06.915644 | orchestrator | 2026-01-05 00:49:06.915657 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:49:06.915669 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:00.381) 0:00:08.862 ******** 2026-01-05 00:49:06.915683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:49:06.915697 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:49:06.915711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 
'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:49:06.915723 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:49:06.915743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:49:06.915763 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:49:06.915777 | orchestrator | 2026-01-05 00:49:06.915790 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-05 00:49:06.915803 | orchestrator | Monday 05 January 2026 00:48:57 +0000 (0:00:01.791) 0:00:10.654 ******** 2026-01-05 00:49:06.915814 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:49:06.915825 | orchestrator | changed: [testbed-node-2] 
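The interleaved "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines throughout this console output come from a watcher that polls each pending task once per interval until it leaves the STARTED state. A minimal sketch of such a loop (the `get_task_state` callable and the task IDs are hypothetical stand-ins, not the actual OSISM manager API):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task leaves the STARTED state.

    get_task_state is a caller-supplied function mapping a task ID to a
    state string such as "STARTED" or "SUCCESS" (a hypothetical stand-in
    for the real task-state lookup). Returns a dict of final states.
    """
    finished = {}
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_pending.append(task_id)
            else:
                finished[task_id] = state
        pending = still_pending
        if pending:
            # Same cadence as the log: sleep, then re-check everything.
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return finished
```

Run against a fake state source, this reproduces the shape of the log above: repeated STARTED lines per cycle, a wait message between cycles, and tasks dropping out as they reach a terminal state.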
2026-01-05 00:49:06.915836 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:49:06.915846 | orchestrator | 2026-01-05 00:49:06.915858 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:49:06.915878 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:49:06.915899 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:49:06.915919 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:49:06.915939 | orchestrator | 2026-01-05 00:49:06.915960 | orchestrator | 2026-01-05 00:49:06.915980 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:49:06.916000 | orchestrator | Monday 05 January 2026 00:49:04 +0000 (0:00:07.077) 0:00:17.731 ******** 2026-01-05 00:49:06.916032 | orchestrator | =============================================================================== 2026-01-05 00:49:06.916045 | orchestrator | memcached : Restart memcached container --------------------------------- 7.08s 2026-01-05 00:49:06.916056 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.98s 2026-01-05 00:49:06.916068 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.79s 2026-01-05 00:49:06.916079 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.53s 2026-01-05 00:49:06.916089 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.11s 2026-01-05 00:49:06.916100 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.97s 2026-01-05 00:49:06.916111 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2026-01-05 00:49:06.916122 | orchestrator 
| Group hosts based on Kolla action --------------------------------------- 0.43s 2026-01-05 00:49:06.916133 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.38s 2026-01-05 00:49:06.916143 | orchestrator | 2026-01-05 00:49:06 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:49:06.916159 | orchestrator | 2026-01-05 00:49:06 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:49:06.916397 | orchestrator | 2026-01-05 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:16.099967 | orchestrator | 2026-01-05 00:49:16 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:49:16.104120 | orchestrator | 2026-01-05 00:49:16 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:49:16.104718 | orchestrator | 2026-01-05 00:49:16 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:49:16.105913 | orchestrator | 2026-01-05 00:49:16 | INFO  | Task 56d33035-d8c6-4bf5-ae45-2d337a472b8d is in state SUCCESS 2026-01-05 00:49:16.107043 | orchestrator | 2026-01-05 00:49:16.107098 | orchestrator | 2026-01-05 00:49:16.107113 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:49:16.107127 | orchestrator | 2026-01-05 00:49:16.107140 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:49:16.107153 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:00.473) 0:00:00.473 ******** 2026-01-05 00:49:16.107276 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:49:16.107296 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:49:16.107310 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:49:16.107322 | orchestrator | 2026-01-05 00:49:16.107335 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:49:16.107348 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:00.340) 0:00:00.814 ******** 2026-01-05 
00:49:16.107361 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-05 00:49:16.107375 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-05 00:49:16.107387 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-05 00:49:16.107400 | orchestrator | 2026-01-05 00:49:16.107413 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-05 00:49:16.107573 | orchestrator | 2026-01-05 00:49:16.107589 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-05 00:49:16.107603 | orchestrator | Monday 05 January 2026 00:48:48 +0000 (0:00:00.747) 0:00:01.561 ******** 2026-01-05 00:49:16.107616 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:49:16.107632 | orchestrator | 2026-01-05 00:49:16.107645 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-05 00:49:16.107664 | orchestrator | Monday 05 January 2026 00:48:49 +0000 (0:00:00.898) 0:00:02.461 ******** 2026-01-05 00:49:16.107681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107851 | orchestrator | 2026-01-05 00:49:16.107864 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-05 00:49:16.107878 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:01.732) 0:00:04.194 ******** 2026-01-05 00:49:16.107892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
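Each container definition above carries a healthcheck of the form `healthcheck_listen <name> <port>` (e.g. `healthcheck_listen redis-sentinel 26379`). As a rough illustration only, not the actual kolla healthcheck script, a listen-style check can be approximated by probing whether anything accepts TCP connections on the port:

```python
import socket


def check_listen(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port.

    Rough stand-in for a "healthcheck_listen <name> <port>" probe; the
    real kolla healthcheck script may additionally verify which process
    owns the listening socket.
    """
    try:
        # create_connection performs the full TCP handshake, so a True
        # result means a listener actually accepted the connection.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A container healthcheck would exit 0 when such a probe succeeds and non-zero otherwise, which is how the `interval`/`retries`/`timeout` values in the definitions above get interpreted by the container runtime.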
2026-01-05 00:49:16.107917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 
'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.107990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108004 | orchestrator | 2026-01-05 00:49:16.108017 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-05 00:49:16.108031 | orchestrator | Monday 05 January 2026 00:48:54 +0000 (0:00:03.641) 0:00:07.835 ******** 2026-01-05 00:49:16.108045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108152 | orchestrator | 2026-01-05 00:49:16.108192 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-01-05 00:49:16.108207 | orchestrator | Monday 05 January 2026 00:48:57 +0000 (0:00:03.199) 0:00:11.034 ******** 2026-01-05 00:49:16.108222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:49:16.108331 | orchestrator | 2026-01-05 00:49:16.108340 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-01-05 00:49:16.108350 | 
orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:02.228) 0:00:13.263 ******** 2026-01-05 00:49:16.108366 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:49:16.108376 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:49:16.108386 | orchestrator | } 2026-01-05 00:49:16.108395 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:49:16.108405 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:49:16.108414 | orchestrator | } 2026-01-05 00:49:16.108423 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:49:16.108433 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:49:16.108442 | orchestrator | } 2026-01-05 00:49:16.108452 | orchestrator | 2026-01-05 00:49:16.108461 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:49:16.108470 | orchestrator | Monday 05 January 2026 00:49:00 +0000 (0:00:00.458) 0:00:13.721 ******** 2026-01-05 00:49:16.108480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-05 00:49:16.108490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-05 00:49:16.108499 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:49:16.108508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-05 00:49:16.108518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-05 00:49:16.108528 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:49:16.108538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-05 00:49:16.108558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-05 00:49:16.108566 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:49:16.108575 | orchestrator | 2026-01-05 00:49:16.108589 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:49:16.108597 | orchestrator | Monday 05 January 2026 00:49:01 +0000 (0:00:01.048) 0:00:14.770 ******** 2026-01-05 00:49:16.108605 | orchestrator | 2026-01-05 00:49:16.108613 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:49:16.108621 | orchestrator | Monday 05 January 2026 00:49:01 +0000 (0:00:00.077) 0:00:14.847 ******** 2026-01-05 00:49:16.108629 | orchestrator | 2026-01-05 00:49:16.108637 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:49:16.108645 | orchestrator | Monday 05 January 2026 00:49:01 +0000 (0:00:00.090) 0:00:14.937 
******** 2026-01-05 00:49:16.108653 | orchestrator | 2026-01-05 00:49:16.108660 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-05 00:49:16.108833 | orchestrator | Monday 05 January 2026 00:49:01 +0000 (0:00:00.072) 0:00:15.009 ******** 2026-01-05 00:49:16.108841 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:49:16.108850 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:49:16.108858 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:49:16.108866 | orchestrator | 2026-01-05 00:49:16.108874 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-05 00:49:16.108895 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:08.527) 0:00:23.537 ******** 2026-01-05 00:49:16.108913 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:49:16.108921 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:49:16.108929 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:49:16.108938 | orchestrator | 2026-01-05 00:49:16.108946 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:49:16.108955 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:49:16.108966 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:49:16.108974 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:49:16.108982 | orchestrator | 2026-01-05 00:49:16.108991 | orchestrator | 2026-01-05 00:49:16.108999 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:49:16.109007 | orchestrator | Monday 05 January 2026 00:49:15 +0000 (0:00:05.049) 0:00:28.586 ******** 2026-01-05 00:49:16.109015 | orchestrator | 
=============================================================================== 2026-01-05 00:49:16.109023 | orchestrator | redis : Restart redis container ----------------------------------------- 8.53s 2026-01-05 00:49:16.109031 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.05s 2026-01-05 00:49:16.109039 | orchestrator | redis : Copying over default config.json files -------------------------- 3.64s 2026-01-05 00:49:16.109047 | orchestrator | redis : Copying over redis config files --------------------------------- 3.20s 2026-01-05 00:49:16.109054 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.23s 2026-01-05 00:49:16.109070 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.73s 2026-01-05 00:49:16.109078 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.05s 2026-01-05 00:49:16.109086 | orchestrator | redis : include_tasks --------------------------------------------------- 0.90s 2026-01-05 00:49:16.109094 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2026-01-05 00:49:16.109102 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.46s 2026-01-05 00:49:16.109110 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-01-05 00:49:16.109118 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2026-01-05 00:49:16.109126 | orchestrator | 2026-01-05 00:49:16 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:49:16.109144 | orchestrator | 2026-01-05 00:49:16 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:49:16.109153 | orchestrator | 2026-01-05 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:19.168987 | orchestrator | 
d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:49:49.609612 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:49:49.611449 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:49:49.612922 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:49:49.615019 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:49:49.615086 | orchestrator | 2026-01-05 00:49:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:52.654821 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state STARTED 2026-01-05 00:49:52.654906 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:49:52.655539 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:49:52.656771 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:49:52.657537 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:49:52.657574 | orchestrator | 2026-01-05 00:49:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:55.689595 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task d61797ed-259a-4c72-8e12-eff4cc132d0b is in state SUCCESS 2026-01-05 00:49:55.690547 | orchestrator | 2026-01-05 00:49:55.690593 | orchestrator | 2026-01-05 00:49:55.690602 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:49:55.690611 | orchestrator | 2026-01-05 00:49:55.690619 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-05 00:49:55.690641 | orchestrator | Monday 05 January 2026 00:48:45 +0000 (0:00:00.271) 0:00:00.271 ******** 2026-01-05 00:49:55.690657 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:49:55.690687 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:49:55.690695 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:49:55.690702 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:49:55.690709 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:49:55.690716 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:49:55.690724 | orchestrator | 2026-01-05 00:49:55.690731 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:49:55.690738 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:01.257) 0:00:01.528 ******** 2026-01-05 00:49:55.690746 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:49:55.690754 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:49:55.690761 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:49:55.690768 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:49:55.690775 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:49:55.690782 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:49:55.690790 | orchestrator | 2026-01-05 00:49:55.690797 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-05 00:49:55.690804 | orchestrator | 2026-01-05 00:49:55.690812 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-05 00:49:55.691033 | orchestrator | Monday 05 January 2026 00:48:48 +0000 (0:00:01.319) 
0:00:02.848 ******** 2026-01-05 00:49:55.691049 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:49:55.691059 | orchestrator | 2026-01-05 00:49:55.691066 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-05 00:49:55.691074 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:02.193) 0:00:05.041 ******** 2026-01-05 00:49:55.691081 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-05 00:49:55.691089 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-05 00:49:55.691097 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-05 00:49:55.691104 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-05 00:49:55.691112 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-05 00:49:55.691140 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-05 00:49:55.691152 | orchestrator | 2026-01-05 00:49:55.691163 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 00:49:55.691170 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:02.424) 0:00:07.465 ******** 2026-01-05 00:49:55.691178 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-05 00:49:55.691185 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-05 00:49:55.691192 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-05 00:49:55.691199 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-05 00:49:55.691206 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-05 00:49:55.691214 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-05 00:49:55.691221 | orchestrator | 2026-01-05 00:49:55.691228 | orchestrator | TASK [module-load 
: Drop module persistence] *********************************** 2026-01-05 00:49:55.691235 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:02.668) 0:00:10.133 ******** 2026-01-05 00:49:55.691242 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-05 00:49:55.691250 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:49:55.691258 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-05 00:49:55.691265 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:49:55.691272 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-05 00:49:55.691279 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:49:55.691296 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-05 00:49:55.691303 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:49:55.691310 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-05 00:49:55.691317 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:49:55.691324 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-05 00:49:55.691331 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:49:55.691338 | orchestrator | 2026-01-05 00:49:55.691346 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-05 00:49:55.691353 | orchestrator | Monday 05 January 2026 00:48:58 +0000 (0:00:02.381) 0:00:12.514 ******** 2026-01-05 00:49:55.691360 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:49:55.691367 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:49:55.691374 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:49:55.691381 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:49:55.691403 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:49:55.691411 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:49:55.691418 | orchestrator | 2026-01-05 00:49:55.691430 | orchestrator | TASK [openvswitch : Ensuring 
config directories exist] ************************* 2026-01-05 00:49:55.691440 | orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:00.813) 0:00:13.328 ******** 2026-01-05 00:49:55.691460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:49:55.691472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:49:55.691480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:49:55.691488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:49:55.691502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:49:55.691514 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.691537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.691545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691589 | orchestrator |
2026-01-05 00:49:55.691597 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-01-05 00:49:55.691604 | orchestrator | Monday 05 January 2026 00:49:00 +0000 (0:00:01.608) 0:00:14.937 ********
2026-01-05 00:49:55.691612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.691620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.691627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.691640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.691651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.691666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.691676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.691746 | orchestrator |
2026-01-05 00:49:55.691755 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-01-05 00:49:55.691764 | orchestrator | Monday 05 January 2026 00:49:03 +0000 (0:00:03.105) 0:00:18.042 ********
2026-01-05 00:49:55.691772 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:49:55.691783 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:49:55.691795 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:49:55.691807 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:49:55.691819 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:49:55.691832 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:49:55.691840 | orchestrator |
2026-01-05 00:49:55.691849 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-01-05 00:49:55.691857 | orchestrator | Monday 05 January 2026 00:49:05 +0000 (0:00:01.402) 0:00:19.444 ********
2026-01-05 00:49:55.692015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.692032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.692042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.692055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.692072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.692081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.692090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.692103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.692111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.692207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.692224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.692268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.692285 | orchestrator |
2026-01-05 00:49:55.692293 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-01-05 00:49:55.692301 | orchestrator | Monday 05 January 2026 00:49:08 +0000 (0:00:03.788) 0:00:23.233 ********
2026-01-05 00:49:55.692308 | orchestrator | changed: [testbed-node-0] => {
2026-01-05 00:49:55.692316 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:49:55.692324 | orchestrator | }
2026-01-05 00:49:55.692331 | orchestrator | changed: [testbed-node-1] => {
2026-01-05 00:49:55.692339 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:49:55.692346 | orchestrator | }
2026-01-05 00:49:55.692353 | orchestrator | changed: [testbed-node-2] => {
2026-01-05 00:49:55.692360 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:49:55.692367 | orchestrator | }
2026-01-05 00:49:55.692375 | orchestrator | changed: [testbed-node-3] => {
2026-01-05 00:49:55.692382 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:49:55.692389 | orchestrator | }
2026-01-05 00:49:55.692396 | orchestrator | changed: [testbed-node-4] => {
2026-01-05 00:49:55.692403 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:49:55.692410 | orchestrator | }
2026-01-05 00:49:55.692417 | orchestrator | changed: [testbed-node-5] => {
2026-01-05 00:49:55.692425 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:49:55.692432 | orchestrator | }
2026-01-05 00:49:55.692439 | orchestrator |
2026-01-05 00:49:55.692446 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-05 00:49:55.692454 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:01.245) 0:00:24.478 ********
2026-01-05 00:49:55.692461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.692469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.692483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.692515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.692528 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:49:55.692536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.693051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.693064 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:49:55.693072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.693080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.693087 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:49:55.693095 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:49:55.693110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.693154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.693164 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:49:55.693175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 00:49:55.693183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 00:49:55.693190 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:49:55.693198 | orchestrator |
2026-01-05 00:49:55.693205 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-05 00:49:55.693212 | orchestrator | Monday 05 January 2026 00:49:12 +0000 (0:00:02.776) 0:00:27.254 ********
2026-01-05 00:49:55.693220 | orchestrator |
2026-01-05 00:49:55.693227 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-05 00:49:55.693234 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.144) 0:00:27.399 ********
2026-01-05 00:49:55.693241 | orchestrator |
2026-01-05 00:49:55.693248 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-05 00:49:55.693255 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.106) 0:00:27.506 ********
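The service dicts echoed by the tasks above carry a kolla-style healthcheck block (interval, retries, start_period, test, timeout). As a rough illustration only (this is not kolla-ansible's actual code, and the seconds-based interpretation of the values is an assumption), such a block maps onto Docker's standard healthcheck flags roughly like this:

```python
# Hypothetical sketch: render the healthcheck block of a logged kolla
# service definition as the equivalent `docker run` healthcheck flags.
# NOT kolla-ansible's implementation; field semantics are assumptions.
def healthcheck_flags(service):
    hc = service["value"]["healthcheck"]
    # In the logged dicts, 'test' is ['CMD-SHELL', '<shell command>'].
    return [
        f"--health-cmd={hc['test'][1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Trimmed-down copy of the openvswitch-db-server item from the log above.
svc = {
    "key": "openvswitch-db-server",
    "value": {
        "container_name": "openvswitch_db",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30",
        },
    },
}
print(healthcheck_flags(svc)[0])  # --health-cmd=ovsdb-client list-dbs
```

This matches the behaviour visible in the log: the `ovsdb-client list-dbs` probe is what later lets the "Waiting for openvswitch_db service to be ready" handler report `ok`.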
2026-01-05 00:49:55.693262 | orchestrator |
2026-01-05 00:49:55.693270 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-05 00:49:55.693277 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.152) 0:00:27.658 ********
2026-01-05 00:49:55.693284 | orchestrator |
2026-01-05 00:49:55.693291 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-05 00:49:55.693298 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.308) 0:00:27.967 ********
2026-01-05 00:49:55.693305 | orchestrator |
2026-01-05 00:49:55.693317 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-05 00:49:55.693325 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.197) 0:00:28.164 ********
2026-01-05 00:49:55.693332 | orchestrator |
2026-01-05 00:49:55.693339 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-01-05 00:49:55.693346 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.142) 0:00:28.307 ********
2026-01-05 00:49:55.693353 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:49:55.693360 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:49:55.693367 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:49:55.693374 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:49:55.693381 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:49:55.693389 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:49:55.693396 | orchestrator |
2026-01-05 00:49:55.693462 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-01-05 00:49:55.693476 | orchestrator | Monday 05 January 2026 00:49:23 +0000 (0:00:09.613) 0:00:37.920 ********
2026-01-05 00:49:55.693484 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:49:55.693492 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:49:55.693499 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:49:55.693506 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:49:55.693513 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:49:55.693521 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:49:55.693528 | orchestrator |
2026-01-05 00:49:55.693535 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-05 00:49:55.693542 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:01.335) 0:00:39.256 ********
2026-01-05 00:49:55.693550 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:49:55.693557 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:49:55.693564 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:49:55.693571 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:49:55.693579 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:49:55.693586 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:49:55.693593 | orchestrator |
2026-01-05 00:49:55.693600 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-05 00:49:55.693607 | orchestrator | Monday 05 January 2026 00:49:30 +0000 (0:00:05.649) 0:00:44.906 ********
2026-01-05 00:49:55.693615 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-05 00:49:55.693623 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-05 00:49:55.693630 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-05 00:49:55.693637 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-05 00:49:55.693648 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-05 00:49:55.693656 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-05 00:49:55.693663 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-05 00:49:55.693670 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-05 00:49:55.693677 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-05 00:49:55.693684 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-05 00:49:55.693691 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-05 00:49:55.693699 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-05 00:49:55.693712 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:49:55.693719 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:49:55.693726 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:49:55.693733 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:49:55.693740 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:49:55.693747 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state':
'absent'}) 2026-01-05 00:49:55.693755 | orchestrator | 2026-01-05 00:49:55.693762 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-05 00:49:55.693769 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:08.977) 0:00:53.883 ******** 2026-01-05 00:49:55.693872 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-05 00:49:55.693883 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:49:55.693891 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-05 00:49:55.693898 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:49:55.693905 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-05 00:49:55.693913 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:49:55.693923 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-05 00:49:55.693935 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-05 00:49:55.693947 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-05 00:49:55.693959 | orchestrator | 2026-01-05 00:49:55.693977 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-05 00:49:55.693993 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:02.558) 0:00:56.441 ******** 2026-01-05 00:49:55.694005 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-05 00:49:55.694079 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:49:55.694094 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-05 00:49:55.694101 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:49:55.694108 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-05 00:49:55.694134 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:49:55.694143 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-05 00:49:55.694158 | orchestrator | changed: 
[testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-05 00:49:55.694166 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-05 00:49:55.694173 | orchestrator | 2026-01-05 00:49:55.694180 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-05 00:49:55.694187 | orchestrator | Monday 05 January 2026 00:49:45 +0000 (0:00:03.769) 0:01:00.211 ******** 2026-01-05 00:49:55.694195 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:49:55.694202 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:49:55.694210 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:49:55.694219 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:49:55.694227 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:49:55.694236 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:49:55.694245 | orchestrator | 2026-01-05 00:49:55.694253 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:49:55.694263 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 00:49:55.694273 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 00:49:55.694292 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 00:49:55.694300 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:49:55.694315 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:49:55.694324 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:49:55.694333 | orchestrator | 2026-01-05 00:49:55.694342 | orchestrator | 2026-01-05 00:49:55.694351 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-05 00:49:55.694359 | orchestrator | Monday 05 January 2026 00:49:53 +0000 (0:00:07.746) 0:01:07.957 ******** 2026-01-05 00:49:55.694368 | orchestrator | =============================================================================== 2026-01-05 00:49:55.694377 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.40s 2026-01-05 00:49:55.694386 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.61s 2026-01-05 00:49:55.694464 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.97s 2026-01-05 00:49:55.694473 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.79s 2026-01-05 00:49:55.694542 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.77s 2026-01-05 00:49:55.694554 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.11s 2026-01-05 00:49:55.694563 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.78s 2026-01-05 00:49:55.694572 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.67s 2026-01-05 00:49:55.694581 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.56s 2026-01-05 00:49:55.694590 | orchestrator | module-load : Load modules ---------------------------------------------- 2.42s 2026-01-05 00:49:55.694598 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.38s 2026-01-05 00:49:55.694607 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.19s 2026-01-05 00:49:55.694616 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.61s 2026-01-05 00:49:55.694625 | orchestrator | openvswitch : Copying over 
ovs-vsctl wrapper ---------------------------- 1.40s 2026-01-05 00:49:55.694634 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.34s 2026-01-05 00:49:55.694642 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.32s 2026-01-05 00:49:55.694651 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.26s 2026-01-05 00:49:55.694660 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.25s 2026-01-05 00:49:55.694669 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s 2026-01-05 00:49:55.694678 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.81s 2026-01-05 00:49:55.694687 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:49:55.694696 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:49:55.694704 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:49:55.694713 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:49:55.694727 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:49:55.694744 | orchestrator | 2026-01-05 00:49:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:58.723503 | orchestrator | 2026-01-05 00:49:58 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:49:58.723743 | orchestrator | 2026-01-05 00:49:58 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:49:58.724670 | orchestrator | 2026-01-05 00:49:58 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state 
STARTED 2026-01-05 00:49:58.726964 | orchestrator | 2026-01-05 00:49:58 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:49:58.727857 | orchestrator | 2026-01-05 00:49:58 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:49:58.728504 | orchestrator | 2026-01-05 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:01.789932 | orchestrator | 2026-01-05 00:50:01 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:01.790070 | orchestrator | 2026-01-05 00:50:01 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:01.790080 | orchestrator | 2026-01-05 00:50:01 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:01.790087 | orchestrator | 2026-01-05 00:50:01 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:01.790155 | orchestrator | 2026-01-05 00:50:01 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:01.790167 | orchestrator | 2026-01-05 00:50:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:04.813764 | orchestrator | 2026-01-05 00:50:04 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:04.814987 | orchestrator | 2026-01-05 00:50:04 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:04.816699 | orchestrator | 2026-01-05 00:50:04 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:04.818508 | orchestrator | 2026-01-05 00:50:04 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:04.819960 | orchestrator | 2026-01-05 00:50:04 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:04.820232 | orchestrator | 2026-01-05 00:50:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 
00:50:07.856885 | orchestrator | 2026-01-05 00:50:07 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:07.857628 | orchestrator | 2026-01-05 00:50:07 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:07.858873 | orchestrator | 2026-01-05 00:50:07 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:07.859926 | orchestrator | 2026-01-05 00:50:07 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:07.860936 | orchestrator | 2026-01-05 00:50:07 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:07.861038 | orchestrator | 2026-01-05 00:50:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:10.921501 | orchestrator | 2026-01-05 00:50:10 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:10.922326 | orchestrator | 2026-01-05 00:50:10 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:10.924381 | orchestrator | 2026-01-05 00:50:10 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:10.924930 | orchestrator | 2026-01-05 00:50:10 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:10.925850 | orchestrator | 2026-01-05 00:50:10 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:10.926744 | orchestrator | 2026-01-05 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:13.962363 | orchestrator | 2026-01-05 00:50:13 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:13.963046 | orchestrator | 2026-01-05 00:50:13 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:13.965073 | orchestrator | 2026-01-05 00:50:13 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 
00:50:13.966320 | orchestrator | 2026-01-05 00:50:13 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:13.967957 | orchestrator | 2026-01-05 00:50:13 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:13.968922 | orchestrator | 2026-01-05 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:17.007399 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:17.007549 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:17.009250 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:17.010332 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:17.011576 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:17.011608 | orchestrator | 2026-01-05 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:20.655030 | orchestrator | 2026-01-05 00:50:20 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:20.655395 | orchestrator | 2026-01-05 00:50:20 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:20.656704 | orchestrator | 2026-01-05 00:50:20 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:20.657455 | orchestrator | 2026-01-05 00:50:20 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:20.659116 | orchestrator | 2026-01-05 00:50:20 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:20.659190 | orchestrator | 2026-01-05 00:50:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:23.694769 | orchestrator 
| 2026-01-05 00:50:23 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:23.694929 | orchestrator | 2026-01-05 00:50:23 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:23.695535 | orchestrator | 2026-01-05 00:50:23 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:23.696156 | orchestrator | 2026-01-05 00:50:23 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:23.697028 | orchestrator | 2026-01-05 00:50:23 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:23.697061 | orchestrator | 2026-01-05 00:50:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:26.760858 | orchestrator | 2026-01-05 00:50:26 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:26.761007 | orchestrator | 2026-01-05 00:50:26 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:26.761057 | orchestrator | 2026-01-05 00:50:26 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:26.761075 | orchestrator | 2026-01-05 00:50:26 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:26.761117 | orchestrator | 2026-01-05 00:50:26 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:26.761131 | orchestrator | 2026-01-05 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:29.905730 | orchestrator | 2026-01-05 00:50:29 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:29.906955 | orchestrator | 2026-01-05 00:50:29 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:29.908711 | orchestrator | 2026-01-05 00:50:29 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:29.910373 | orchestrator | 
2026-01-05 00:50:29 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:29.911630 | orchestrator | 2026-01-05 00:50:29 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state STARTED 2026-01-05 00:50:29.911674 | orchestrator | 2026-01-05 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:32.943151 | orchestrator | 2026-01-05 00:50:32 | INFO  | Task aa59028d-85f9-4abb-bd4a-bd37a96643e9 is in state STARTED 2026-01-05 00:50:32.943501 | orchestrator | 2026-01-05 00:50:32 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:32.944404 | orchestrator | 2026-01-05 00:50:32 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:32.948156 | orchestrator | 2026-01-05 00:50:32 | INFO  | Task 78a20174-eb9b-4159-9b1b-80e91cb1b08f is in state STARTED 2026-01-05 00:50:32.948811 | orchestrator | 2026-01-05 00:50:32 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:32.949675 | orchestrator | 2026-01-05 00:50:32 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:32.952487 | orchestrator | 2026-01-05 00:50:32 | INFO  | Task 1b5142c3-d874-4968-b4e2-d6e91c2e3707 is in state SUCCESS 2026-01-05 00:50:32.954271 | orchestrator | 2026-01-05 00:50:32.954324 | orchestrator | 2026-01-05 00:50:32.954333 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-05 00:50:32.954340 | orchestrator | 2026-01-05 00:50:32.954345 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-05 00:50:32.954351 | orchestrator | Monday 05 January 2026 00:46:02 +0000 (0:00:00.209) 0:00:00.209 ******** 2026-01-05 00:50:32.954356 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:50:32.954363 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:50:32.954368 | orchestrator | ok: [testbed-node-5] 
2026-01-05 00:50:32.954373 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.954381 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.954390 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.954399 | orchestrator | 2026-01-05 00:50:32.954407 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-05 00:50:32.954417 | orchestrator | Monday 05 January 2026 00:46:03 +0000 (0:00:00.877) 0:00:01.087 ******** 2026-01-05 00:50:32.954426 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.954436 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.954444 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.954452 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.954461 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.954469 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.954474 | orchestrator | 2026-01-05 00:50:32.954478 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-05 00:50:32.954519 | orchestrator | Monday 05 January 2026 00:46:04 +0000 (0:00:00.753) 0:00:01.840 ******** 2026-01-05 00:50:32.954525 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.954530 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.954534 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.954539 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.954544 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.954549 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.954553 | orchestrator | 2026-01-05 00:50:32.954558 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-05 00:50:32.954563 | orchestrator | Monday 05 January 2026 00:46:05 +0000 (0:00:00.788) 0:00:02.629 ******** 2026-01-05 00:50:32.954567 | orchestrator | changed: [testbed-node-4] 2026-01-05 
00:50:32.954572 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:50:32.954577 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:50:32.954582 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.954587 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.954592 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.954596 | orchestrator | 2026-01-05 00:50:32.954601 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-05 00:50:32.954606 | orchestrator | Monday 05 January 2026 00:46:07 +0000 (0:00:02.192) 0:00:04.822 ******** 2026-01-05 00:50:32.954611 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:50:32.954615 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:50:32.954620 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:50:32.954625 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.954629 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.954634 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.954639 | orchestrator | 2026-01-05 00:50:32.954644 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-05 00:50:32.954648 | orchestrator | Monday 05 January 2026 00:46:08 +0000 (0:00:01.096) 0:00:05.918 ******** 2026-01-05 00:50:32.954653 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:50:32.954658 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:50:32.954663 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:50:32.954667 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.954672 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.954677 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.954682 | orchestrator | 2026-01-05 00:50:32.954686 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-05 00:50:32.954691 | orchestrator | Monday 05 January 2026 00:46:09 
+0000 (0:00:01.087) 0:00:07.006 ******** 2026-01-05 00:50:32.954696 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.954701 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.954705 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.954711 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.954719 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.954728 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.954733 | orchestrator | 2026-01-05 00:50:32.954737 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-05 00:50:32.954743 | orchestrator | Monday 05 January 2026 00:46:10 +0000 (0:00:00.670) 0:00:07.676 ******** 2026-01-05 00:50:32.954748 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.954752 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.954757 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.954762 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.954766 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.954771 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.954776 | orchestrator | 2026-01-05 00:50:32.954781 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-05 00:50:32.954785 | orchestrator | Monday 05 January 2026 00:46:11 +0000 (0:00:00.858) 0:00:08.534 ******** 2026-01-05 00:50:32.954790 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:50:32.954800 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:50:32.954805 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.954810 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:50:32.954814 | orchestrator | skipping: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:50:32.954819 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.954824 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:50:32.954828 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:50:32.954833 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.954838 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:50:32.954854 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:50:32.954859 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.954864 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:50:32.954869 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:50:32.954874 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.954879 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:50:32.954883 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:50:32.954888 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.954893 | orchestrator | 2026-01-05 00:50:32.954898 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-05 00:50:32.954903 | orchestrator | Monday 05 January 2026 00:46:11 +0000 (0:00:00.685) 0:00:09.220 ******** 2026-01-05 00:50:32.954907 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.954912 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.954917 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.954922 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.954926 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
00:50:32.954931 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.954936 | orchestrator | 2026-01-05 00:50:32.954945 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-05 00:50:32.954951 | orchestrator | Monday 05 January 2026 00:46:13 +0000 (0:00:01.765) 0:00:10.986 ******** 2026-01-05 00:50:32.954956 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:50:32.954961 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:50:32.954965 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:50:32.954970 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.954975 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.954980 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.954984 | orchestrator | 2026-01-05 00:50:32.954989 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-05 00:50:32.954994 | orchestrator | Monday 05 January 2026 00:46:14 +0000 (0:00:01.438) 0:00:12.424 ******** 2026-01-05 00:50:32.954999 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.955004 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:50:32.955008 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.955013 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:50:32.955018 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.955023 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:50:32.955027 | orchestrator | 2026-01-05 00:50:32.955032 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-05 00:50:32.955037 | orchestrator | Monday 05 January 2026 00:46:20 +0000 (0:00:05.863) 0:00:18.288 ******** 2026-01-05 00:50:32.955042 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.955047 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.955051 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.955060 
| orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955065 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955087 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955092 | orchestrator | 2026-01-05 00:50:32.955097 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-05 00:50:32.955102 | orchestrator | Monday 05 January 2026 00:46:22 +0000 (0:00:02.107) 0:00:20.395 ******** 2026-01-05 00:50:32.955107 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.955111 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.955116 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.955121 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955126 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955130 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955135 | orchestrator | 2026-01-05 00:50:32.955140 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-05 00:50:32.955147 | orchestrator | Monday 05 January 2026 00:46:25 +0000 (0:00:02.186) 0:00:22.582 ******** 2026-01-05 00:50:32.955152 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.955156 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.955161 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.955166 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955170 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955175 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955180 | orchestrator | 2026-01-05 00:50:32.955185 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-05 00:50:32.955190 | orchestrator | Monday 05 January 2026 00:46:26 +0000 (0:00:01.434) 0:00:24.017 ******** 2026-01-05 00:50:32.955194 | orchestrator | skipping: 
[testbed-node-3] => (item=rancher)  2026-01-05 00:50:32.955200 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-05 00:50:32.955204 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.955209 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-05 00:50:32.955214 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-05 00:50:32.955219 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.955223 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-05 00:50:32.955228 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-05 00:50:32.955233 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.955238 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-05 00:50:32.955243 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-05 00:50:32.955247 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955252 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-05 00:50:32.955257 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-05 00:50:32.955261 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955267 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-05 00:50:32.955271 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-05 00:50:32.955276 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955281 | orchestrator | 2026-01-05 00:50:32.955286 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-05 00:50:32.955295 | orchestrator | Monday 05 January 2026 00:46:27 +0000 (0:00:01.264) 0:00:25.281 ******** 2026-01-05 00:50:32.955300 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.955305 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.955309 | orchestrator | skipping: [testbed-node-5] 2026-01-05 
00:50:32.955314 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955319 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955324 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955328 | orchestrator | 2026-01-05 00:50:32.955333 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-05 00:50:32.955342 | orchestrator | Monday 05 January 2026 00:46:28 +0000 (0:00:00.824) 0:00:26.105 ******** 2026-01-05 00:50:32.955347 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.955352 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.955357 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.955361 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955366 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955371 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955377 | orchestrator | 2026-01-05 00:50:32.955385 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-05 00:50:32.955392 | orchestrator | 2026-01-05 00:50:32.955402 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-05 00:50:32.955410 | orchestrator | Monday 05 January 2026 00:46:30 +0000 (0:00:01.617) 0:00:27.722 ******** 2026-01-05 00:50:32.955417 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.955428 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.955437 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.955444 | orchestrator | 2026-01-05 00:50:32.955451 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-05 00:50:32.955459 | orchestrator | Monday 05 January 2026 00:46:31 +0000 (0:00:01.713) 0:00:29.436 ******** 2026-01-05 00:50:32.955466 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.955473 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 00:50:32.955480 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.955488 | orchestrator | 2026-01-05 00:50:32.955495 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-05 00:50:32.955503 | orchestrator | Monday 05 January 2026 00:46:33 +0000 (0:00:01.419) 0:00:30.855 ******** 2026-01-05 00:50:32.955510 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.955517 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.955525 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.955532 | orchestrator | 2026-01-05 00:50:32.955539 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-05 00:50:32.955547 | orchestrator | Monday 05 January 2026 00:46:34 +0000 (0:00:01.327) 0:00:32.183 ******** 2026-01-05 00:50:32.955554 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.955562 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.955570 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.955578 | orchestrator | 2026-01-05 00:50:32.955586 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-05 00:50:32.955594 | orchestrator | Monday 05 January 2026 00:46:35 +0000 (0:00:00.838) 0:00:33.021 ******** 2026-01-05 00:50:32.955602 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955609 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955614 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955618 | orchestrator | 2026-01-05 00:50:32.955623 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-05 00:50:32.955628 | orchestrator | Monday 05 January 2026 00:46:35 +0000 (0:00:00.321) 0:00:33.342 ******** 2026-01-05 00:50:32.955633 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.955638 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.955643 | 
orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.955647 | orchestrator | 2026-01-05 00:50:32.955652 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-05 00:50:32.955657 | orchestrator | Monday 05 January 2026 00:46:37 +0000 (0:00:01.502) 0:00:34.845 ******** 2026-01-05 00:50:32.955662 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.955667 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.955671 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.955676 | orchestrator | 2026-01-05 00:50:32.955681 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-05 00:50:32.955686 | orchestrator | Monday 05 January 2026 00:46:38 +0000 (0:00:01.317) 0:00:36.162 ******** 2026-01-05 00:50:32.955691 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:50:32.955701 | orchestrator | 2026-01-05 00:50:32.955706 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-05 00:50:32.955711 | orchestrator | Monday 05 January 2026 00:46:39 +0000 (0:00:00.449) 0:00:36.612 ******** 2026-01-05 00:50:32.955715 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.955720 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.955725 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.955730 | orchestrator | 2026-01-05 00:50:32.955735 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-01-05 00:50:32.955740 | orchestrator | Monday 05 January 2026 00:46:41 +0000 (0:00:02.624) 0:00:39.236 ******** 2026-01-05 00:50:32.955746 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955751 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955756 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.955761 | orchestrator | 
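The k3s_custom_registries tasks earlier in this play were all skipped because no custom container registry is configured for this run; when one is, the role renders /etc/rancher/k3s/registries.yaml. A minimal sketch of what that file could contain — the mirror endpoint registry.example.com is a hypothetical placeholder, not taken from this job:

```shell
# Stage a registries.yaml like the one the k3s_custom_registries role
# would write to /etc/rancher/k3s/ (written to a scratch dir here).
workdir="$(mktemp -d)"
cat > "$workdir/registries.yaml" <<'EOF'
# k3s containerd registry mirrors -- endpoint is a hypothetical example
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"
EOF
grep -q 'registry.example.com' "$workdir/registries.yaml" && echo "registries.yaml staged"
```

k3s reads this file at startup and translates it into containerd mirror configuration, so the k3s service has to be restarted after changes.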
2026-01-05 00:50:32.955766 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-05 00:50:32.955771 | orchestrator | Monday 05 January 2026 00:46:42 +0000 (0:00:00.913) 0:00:40.149 ******** 2026-01-05 00:50:32.955776 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955781 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955786 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.955791 | orchestrator | 2026-01-05 00:50:32.955796 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-05 00:50:32.955801 | orchestrator | Monday 05 January 2026 00:46:44 +0000 (0:00:01.382) 0:00:41.532 ******** 2026-01-05 00:50:32.955806 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955811 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955817 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.955822 | orchestrator | 2026-01-05 00:50:32.955827 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-01-05 00:50:32.955837 | orchestrator | Monday 05 January 2026 00:46:45 +0000 (0:00:01.705) 0:00:43.238 ******** 2026-01-05 00:50:32.955842 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955847 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955852 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955857 | orchestrator | 2026-01-05 00:50:32.955862 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-01-05 00:50:32.955867 | orchestrator | Monday 05 January 2026 00:46:46 +0000 (0:00:00.792) 0:00:44.030 ******** 2026-01-05 00:50:32.955872 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.955877 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.955883 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.955888 | orchestrator | 
2026-01-05 00:50:32.955893 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-05 00:50:32.955898 | orchestrator | Monday 05 January 2026 00:46:47 +0000 (0:00:00.561) 0:00:44.591 ******** 2026-01-05 00:50:32.955903 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.955908 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.955913 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.955918 | orchestrator | 2026-01-05 00:50:32.955923 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-01-05 00:50:32.955928 | orchestrator | Monday 05 January 2026 00:46:49 +0000 (0:00:02.272) 0:00:46.864 ******** 2026-01-05 00:50:32.955933 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.955938 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.955952 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.955958 | orchestrator | 2026-01-05 00:50:32.955964 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-01-05 00:50:32.955969 | orchestrator | Monday 05 January 2026 00:46:52 +0000 (0:00:02.891) 0:00:49.756 ******** 2026-01-05 00:50:32.955974 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.955979 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.955984 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.955989 | orchestrator | 2026-01-05 00:50:32.955994 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-01-05 00:50:32.956003 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:00.910) 0:00:50.667 ******** 2026-01-05 00:50:32.956009 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
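The "Init cluster inside the transient k3s-init service" step above typically wraps the first `k3s server` run in a throwaway systemd unit, so the play can watch it and kill it once the cluster is bootstrapped (the later "Kill the temporary service" task). A sketch of the kind of command involved — the unit name, flags, and address are assumptions for illustration, not read from the role source:

```shell
# Sketch: the one-off cluster bootstrap runs under a transient systemd
# unit (systemd-run) so it can be stopped cleanly after all nodes join.
server_ip="192.168.16.10"   # hypothetical first-master address
init_cmd="systemd-run -p RestartSec=2 -p Restart=on-failure \
  --unit=k3s-init k3s server --cluster-init \
  --tls-san ${server_ip}"
echo "$init_cmd"
```

The transient unit keeps the bootstrap out of the permanent service configuration; the real `k3s.service` file is copied and enabled only after the init phase succeeds.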
2026-01-05 00:50:32.956014 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:50:32.956020 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:50:32.956025 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:50:32.956030 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:50:32.956035 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:50:32.956040 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:50:32.956045 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:50:32.956050 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:50:32.956056 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:50:32.956061 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:50:32.956066 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
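The verify task retries until every master appears as Ready in `kubectl get nodes`; here it burned through a few retries before all three servers joined. A rough equivalent of that check, run against canned `kubectl get nodes` output since no live cluster is available in this sketch (node names mirror the log, the version string is a placeholder):

```shell
# Count Ready nodes in (sample) `kubectl get nodes` output and compare
# against the expected master count, like the role's verify-join retry loop.
expected=3
nodes_output='NAME             STATUS   ROLES                  AGE   VERSION
testbed-node-0   Ready    control-plane,master   1m    v1.30.0+k3s1
testbed-node-1   Ready    control-plane,master   1m    v1.30.0+k3s1
testbed-node-2   Ready    control-plane,master   1m    v1.30.0+k3s1'
ready=$(printf '%s\n' "$nodes_output" | awk 'NR > 1 && $2 == "Ready"' | wc -l)
[ "$ready" -eq "$expected" ] && echo "all $expected masters joined"
```

When the check keeps failing, the task name itself points at the right place to look: the journal of `k3s-init.service` on the node that never joined.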
2026-01-05 00:50:32.956103 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.956109 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.956114 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.956119 | orchestrator | 2026-01-05 00:50:32.956124 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-05 00:50:32.956129 | orchestrator | Monday 05 January 2026 00:47:36 +0000 (0:00:43.628) 0:01:34.295 ******** 2026-01-05 00:50:32.956135 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.956140 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.956145 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.956150 | orchestrator | 2026-01-05 00:50:32.956155 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-05 00:50:32.956160 | orchestrator | Monday 05 January 2026 00:47:37 +0000 (0:00:00.675) 0:01:34.971 ******** 2026-01-05 00:50:32.956165 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.956170 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.956175 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.956180 | orchestrator | 2026-01-05 00:50:32.956186 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-05 00:50:32.956191 | orchestrator | Monday 05 January 2026 00:47:38 +0000 (0:00:01.310) 0:01:36.281 ******** 2026-01-05 00:50:32.956196 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.956201 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.956206 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.956211 | orchestrator | 2026-01-05 00:50:32.956219 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-05 00:50:32.956225 | orchestrator | Monday 05 January 2026 00:47:40 +0000 (0:00:01.427) 0:01:37.709 ******** 2026-01-05 00:50:32.956230 
| orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.956240 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.956246 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.956251 | orchestrator | 2026-01-05 00:50:32.956256 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-05 00:50:32.956261 | orchestrator | Monday 05 January 2026 00:48:06 +0000 (0:00:26.408) 0:02:04.118 ******** 2026-01-05 00:50:32.956266 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.956271 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.956276 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.956282 | orchestrator | 2026-01-05 00:50:32.956287 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-05 00:50:32.956292 | orchestrator | Monday 05 January 2026 00:48:07 +0000 (0:00:00.781) 0:02:04.899 ******** 2026-01-05 00:50:32.956297 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.956302 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.956307 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.956312 | orchestrator | 2026-01-05 00:50:32.956317 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-05 00:50:32.956322 | orchestrator | Monday 05 January 2026 00:48:08 +0000 (0:00:00.811) 0:02:05.711 ******** 2026-01-05 00:50:32.956332 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.956337 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.956342 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.956347 | orchestrator | 2026-01-05 00:50:32.956352 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-05 00:50:32.956357 | orchestrator | Monday 05 January 2026 00:48:08 +0000 (0:00:00.780) 0:02:06.491 ******** 2026-01-05 00:50:32.956362 | orchestrator | ok: [testbed-node-0] 
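The node-token tasks around this point follow a common Ansible pattern: record the file's access mode, loosen it so the token can be read, then restore the original mode afterwards. A standalone sketch of that pattern on a scratch file — on a real server the token lives at /var/lib/rancher/k3s/server/node-token, and the token value here is fake:

```shell
# Record-loosen-read-restore pattern used for the k3s node-token.
token_file="$(mktemp)"
echo "K10deadbeef::server:secret" > "$token_file"   # fake token
chmod 600 "$token_file"
orig_mode=$(stat -c '%a' "$token_file")   # remember original mode (600)
chmod 644 "$token_file"                   # widen so the read succeeds
node_token=$(cat "$token_file")
chmod "$orig_mode" "$token_file"          # restore original permissions
echo "token read, mode restored to $orig_mode"
```

Restoring the mode matters because the node-token is a cluster join secret; leaving it world-readable would let any local user enroll an agent.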
2026-01-05 00:50:32.956368 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.956373 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.956381 | orchestrator | 2026-01-05 00:50:32.956391 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-05 00:50:32.956399 | orchestrator | Monday 05 January 2026 00:48:09 +0000 (0:00:00.935) 0:02:07.427 ******** 2026-01-05 00:50:32.956409 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.956419 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.956429 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.956438 | orchestrator | 2026-01-05 00:50:32.956448 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-05 00:50:32.956453 | orchestrator | Monday 05 January 2026 00:48:10 +0000 (0:00:00.342) 0:02:07.770 ******** 2026-01-05 00:50:32.956459 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.956464 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.956469 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.956475 | orchestrator | 2026-01-05 00:50:32.956480 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-05 00:50:32.956485 | orchestrator | Monday 05 January 2026 00:48:10 +0000 (0:00:00.609) 0:02:08.379 ******** 2026-01-05 00:50:32.956490 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.956496 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.956501 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.956506 | orchestrator | 2026-01-05 00:50:32.956511 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-05 00:50:32.956516 | orchestrator | Monday 05 January 2026 00:48:11 +0000 (0:00:00.556) 0:02:08.936 ******** 2026-01-05 00:50:32.956521 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.956527 | 
orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.956532 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.956537 | orchestrator | 2026-01-05 00:50:32.956542 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-05 00:50:32.956547 | orchestrator | Monday 05 January 2026 00:48:12 +0000 (0:00:01.077) 0:02:10.013 ******** 2026-01-05 00:50:32.956552 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:32.956557 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:32.956563 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:32.956572 | orchestrator | 2026-01-05 00:50:32.956577 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-05 00:50:32.956583 | orchestrator | Monday 05 January 2026 00:48:13 +0000 (0:00:00.881) 0:02:10.895 ******** 2026-01-05 00:50:32.956588 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.956593 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.956598 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.956603 | orchestrator | 2026-01-05 00:50:32.956608 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-05 00:50:32.956613 | orchestrator | Monday 05 January 2026 00:48:13 +0000 (0:00:00.313) 0:02:11.208 ******** 2026-01-05 00:50:32.956618 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.956623 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.956628 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.956634 | orchestrator | 2026-01-05 00:50:32.956639 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-05 00:50:32.956644 | orchestrator | Monday 05 January 2026 00:48:13 +0000 (0:00:00.301) 0:02:11.509 ******** 2026-01-05 00:50:32.956649 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.956654 | orchestrator | 
ok: [testbed-node-0] 2026-01-05 00:50:32.956659 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.956664 | orchestrator | 2026-01-05 00:50:32.956669 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-05 00:50:32.956675 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:00.902) 0:02:12.412 ******** 2026-01-05 00:50:32.956680 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.956685 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.956690 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.956695 | orchestrator | 2026-01-05 00:50:32.956700 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-05 00:50:32.956706 | orchestrator | Monday 05 January 2026 00:48:15 +0000 (0:00:00.711) 0:02:13.123 ******** 2026-01-05 00:50:32.956711 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:50:32.956721 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:50:32.956726 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:50:32.956731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:50:32.956737 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:50:32.956742 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:50:32.956747 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:50:32.956752 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 
00:50:32.956757 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:50:32.956762 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-05 00:50:32.956767 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:50:32.956776 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:50:32.956781 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-05 00:50:32.956786 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:50:32.956791 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:50:32.956796 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:50:32.956806 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:50:32.956811 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:50:32.956816 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:50:32.956821 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:50:32.956826 | orchestrator | 2026-01-05 00:50:32.956831 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-05 00:50:32.956837 | orchestrator | 2026-01-05 00:50:32.956842 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-05 00:50:32.956847 | orchestrator | Monday 05 January 2026 00:48:18 +0000 (0:00:03.084) 
0:02:16.208 ******** 2026-01-05 00:50:32.956852 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:50:32.956857 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:50:32.956863 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:50:32.956868 | orchestrator | 2026-01-05 00:50:32.956873 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-05 00:50:32.956878 | orchestrator | Monday 05 January 2026 00:48:19 +0000 (0:00:00.520) 0:02:16.728 ******** 2026-01-05 00:50:32.956883 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:50:32.956888 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:50:32.956893 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:50:32.956898 | orchestrator | 2026-01-05 00:50:32.956904 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-05 00:50:32.956909 | orchestrator | Monday 05 January 2026 00:48:20 +0000 (0:00:01.553) 0:02:18.281 ******** 2026-01-05 00:50:32.956914 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:50:32.956919 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:50:32.956924 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:50:32.956929 | orchestrator | 2026-01-05 00:50:32.956934 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-05 00:50:32.956939 | orchestrator | Monday 05 January 2026 00:48:21 +0000 (0:00:00.331) 0:02:18.613 ******** 2026-01-05 00:50:32.956944 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:50:32.956949 | orchestrator | 2026-01-05 00:50:32.956955 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-05 00:50:32.956960 | orchestrator | Monday 05 January 2026 00:48:21 +0000 (0:00:00.699) 0:02:19.313 ******** 2026-01-05 00:50:32.956965 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.956970 
| orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.956975 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.956980 | orchestrator | 2026-01-05 00:50:32.956985 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-05 00:50:32.956990 | orchestrator | Monday 05 January 2026 00:48:22 +0000 (0:00:00.341) 0:02:19.654 ******** 2026-01-05 00:50:32.956995 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.957000 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.957005 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.957010 | orchestrator | 2026-01-05 00:50:32.957015 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-05 00:50:32.957021 | orchestrator | Monday 05 January 2026 00:48:22 +0000 (0:00:00.334) 0:02:19.989 ******** 2026-01-05 00:50:32.957026 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.957031 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.957036 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.957041 | orchestrator | 2026-01-05 00:50:32.957046 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-05 00:50:32.957051 | orchestrator | Monday 05 January 2026 00:48:22 +0000 (0:00:00.347) 0:02:20.337 ******** 2026-01-05 00:50:32.957056 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:50:32.957066 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:50:32.957096 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:50:32.957102 | orchestrator | 2026-01-05 00:50:32.957112 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-05 00:50:32.957117 | orchestrator | Monday 05 January 2026 00:48:23 +0000 (0:00:00.931) 0:02:21.269 ******** 2026-01-05 00:50:32.957122 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:50:32.957127 | 
orchestrator | changed: [testbed-node-4] 2026-01-05 00:50:32.957133 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:50:32.957138 | orchestrator | 2026-01-05 00:50:32.957143 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-05 00:50:32.957148 | orchestrator | Monday 05 January 2026 00:48:24 +0000 (0:00:01.206) 0:02:22.475 ******** 2026-01-05 00:50:32.957153 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:50:32.957158 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:50:32.957163 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:50:32.957168 | orchestrator | 2026-01-05 00:50:32.957173 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-05 00:50:32.957178 | orchestrator | Monday 05 January 2026 00:48:26 +0000 (0:00:01.356) 0:02:23.831 ******** 2026-01-05 00:50:32.957183 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:50:32.957188 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:50:32.957193 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:50:32.957198 | orchestrator | 2026-01-05 00:50:32.957203 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-05 00:50:32.957208 | orchestrator | 2026-01-05 00:50:32.957214 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-05 00:50:32.957219 | orchestrator | Monday 05 January 2026 00:48:37 +0000 (0:00:11.205) 0:02:35.037 ******** 2026-01-05 00:50:32.957224 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:32.957229 | orchestrator | 2026-01-05 00:50:32.957234 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-05 00:50:32.957239 | orchestrator | Monday 05 January 2026 00:48:38 +0000 (0:00:00.902) 0:02:35.940 ******** 2026-01-05 00:50:32.957244 | orchestrator | changed: [testbed-manager] 2026-01-05 
00:50:32.957249 | orchestrator | 2026-01-05 00:50:32.957254 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-05 00:50:32.957259 | orchestrator | Monday 05 January 2026 00:48:38 +0000 (0:00:00.473) 0:02:36.413 ******** 2026-01-05 00:50:32.957264 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-05 00:50:32.957270 | orchestrator | 2026-01-05 00:50:32.957275 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-05 00:50:32.957280 | orchestrator | Monday 05 January 2026 00:48:39 +0000 (0:00:00.555) 0:02:36.968 ******** 2026-01-05 00:50:32.957285 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:32.957290 | orchestrator | 2026-01-05 00:50:32.957295 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-05 00:50:32.957300 | orchestrator | Monday 05 January 2026 00:48:40 +0000 (0:00:01.169) 0:02:38.138 ******** 2026-01-05 00:50:32.957305 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:32.957310 | orchestrator | 2026-01-05 00:50:32.957315 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-05 00:50:32.957320 | orchestrator | Monday 05 January 2026 00:48:41 +0000 (0:00:00.581) 0:02:38.720 ******** 2026-01-05 00:50:32.957325 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:50:32.957330 | orchestrator | 2026-01-05 00:50:32.957335 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-05 00:50:32.957341 | orchestrator | Monday 05 January 2026 00:48:42 +0000 (0:00:01.621) 0:02:40.341 ******** 2026-01-05 00:50:32.957346 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:50:32.957351 | orchestrator | 2026-01-05 00:50:32.957356 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-01-05 00:50:32.957361 | orchestrator | Monday 05 January 2026 00:48:43 +0000 (0:00:00.842) 0:02:41.183 ******** 2026-01-05 00:50:32.957372 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:32.957379 | orchestrator | 2026-01-05 00:50:32.957388 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-05 00:50:32.957396 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.391) 0:02:41.575 ******** 2026-01-05 00:50:32.957405 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:32.957414 | orchestrator | 2026-01-05 00:50:32.957423 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-05 00:50:32.957433 | orchestrator | 2026-01-05 00:50:32.957442 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-05 00:50:32.957450 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.573) 0:02:42.148 ******** 2026-01-05 00:50:32.957459 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:32.957465 | orchestrator | 2026-01-05 00:50:32.957470 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-05 00:50:32.957475 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.124) 0:02:42.273 ******** 2026-01-05 00:50:32.957480 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:50:32.957485 | orchestrator | 2026-01-05 00:50:32.957491 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-05 00:50:32.957496 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.211) 0:02:42.484 ******** 2026-01-05 00:50:32.957501 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:32.957506 | orchestrator | 2026-01-05 00:50:32.957511 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-01-05 00:50:32.957516 | orchestrator | Monday 05 January 2026 00:48:45 +0000 (0:00:00.843) 0:02:43.328 ******** 2026-01-05 00:50:32.957521 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:32.957526 | orchestrator | 2026-01-05 00:50:32.957531 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-01-05 00:50:32.957536 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:01.382) 0:02:44.711 ******** 2026-01-05 00:50:32.957541 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:32.957547 | orchestrator | 2026-01-05 00:50:32.957552 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-01-05 00:50:32.957557 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:00.790) 0:02:45.501 ******** 2026-01-05 00:50:32.957562 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:32.957567 | orchestrator | 2026-01-05 00:50:32.957577 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-01-05 00:50:32.957582 | orchestrator | Monday 05 January 2026 00:48:48 +0000 (0:00:00.423) 0:02:45.926 ******** 2026-01-05 00:50:32.957587 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:32.957593 | orchestrator | 2026-01-05 00:50:32.957598 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-01-05 00:50:32.957603 | orchestrator | Monday 05 January 2026 00:48:54 +0000 (0:00:06.186) 0:02:52.112 ******** 2026-01-05 00:50:32.957609 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:32.957614 | orchestrator | 2026-01-05 00:50:32.957619 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-01-05 00:50:32.957624 | orchestrator | Monday 05 January 2026 00:49:09 +0000 (0:00:14.603) 0:03:06.716 ******** 2026-01-05 00:50:32.957679 | orchestrator | ok: [testbed-manager] 2026-01-05 
00:50:32.957693 | orchestrator | 2026-01-05 00:50:32.957699 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-01-05 00:50:32.957704 | orchestrator | 2026-01-05 00:50:32.957709 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-01-05 00:50:32.957714 | orchestrator | Monday 05 January 2026 00:49:09 +0000 (0:00:00.579) 0:03:07.295 ******** 2026-01-05 00:50:32.957719 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.957724 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.957729 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.957734 | orchestrator | 2026-01-05 00:50:32.957743 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-01-05 00:50:32.957755 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:00.322) 0:03:07.617 ******** 2026-01-05 00:50:32.957761 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.957766 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.957771 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.957776 | orchestrator | 2026-01-05 00:50:32.957781 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-01-05 00:50:32.957786 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:00.537) 0:03:08.155 ******** 2026-01-05 00:50:32.957791 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:50:32.957797 | orchestrator | 2026-01-05 00:50:32.957802 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-01-05 00:50:32.957807 | orchestrator | Monday 05 January 2026 00:49:11 +0000 (0:00:00.619) 0:03:08.775 ******** 2026-01-05 00:50:32.957812 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:50:32.957817 | 
orchestrator | 2026-01-05 00:50:32.957822 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-05 00:50:32.957827 | orchestrator | Monday 05 January 2026 00:49:12 +0000 (0:00:01.053) 0:03:09.829 ******** 2026-01-05 00:50:32.957832 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:50:32.957838 | orchestrator | 2026-01-05 00:50:32.957843 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-05 00:50:32.957848 | orchestrator | Monday 05 January 2026 00:49:12 +0000 (0:00:00.656) 0:03:10.485 ******** 2026-01-05 00:50:32.957853 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.957858 | orchestrator | 2026-01-05 00:50:32.957863 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-05 00:50:32.957868 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.098) 0:03:10.584 ******** 2026-01-05 00:50:32.957873 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:50:32.957878 | orchestrator | 2026-01-05 00:50:32.957883 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-05 00:50:32.957889 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.793) 0:03:11.378 ******** 2026-01-05 00:50:32.957893 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.957899 | orchestrator | 2026-01-05 00:50:32.957904 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-05 00:50:32.957909 | orchestrator | Monday 05 January 2026 00:49:14 +0000 (0:00:00.143) 0:03:11.521 ******** 2026-01-05 00:50:32.957914 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.957919 | orchestrator | 2026-01-05 00:50:32.957924 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-05 00:50:32.957929 | orchestrator | Monday 05 
January 2026 00:49:14 +0000 (0:00:00.089) 0:03:11.610 ******** 2026-01-05 00:50:32.957936 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.957945 | orchestrator | 2026-01-05 00:50:32.957954 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-05 00:50:32.957964 | orchestrator | Monday 05 January 2026 00:49:14 +0000 (0:00:00.091) 0:03:11.702 ******** 2026-01-05 00:50:32.957974 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.957983 | orchestrator | 2026-01-05 00:50:32.957993 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-05 00:50:32.958001 | orchestrator | Monday 05 January 2026 00:49:14 +0000 (0:00:00.088) 0:03:11.791 ******** 2026-01-05 00:50:32.958006 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:50:32.958040 | orchestrator | 2026-01-05 00:50:32.958046 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-05 00:50:32.958052 | orchestrator | Monday 05 January 2026 00:49:19 +0000 (0:00:04.969) 0:03:16.761 ******** 2026-01-05 00:50:32.958060 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-05 00:50:32.958065 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-01-05 00:50:32.958130 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-05 00:50:32.958140 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-05 00:50:32.958150 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-05 00:50:32.958160 | orchestrator | 2026-01-05 00:50:32.958178 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-05 00:50:32.958187 | orchestrator | Monday 05 January 2026 00:50:02 +0000 (0:00:43.020) 0:03:59.782 ******** 2026-01-05 00:50:32.958201 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:50:32.958208 | orchestrator | 2026-01-05 00:50:32.958213 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-05 00:50:32.958219 | orchestrator | Monday 05 January 2026 00:50:03 +0000 (0:00:01.228) 0:04:01.010 ******** 2026-01-05 00:50:32.958225 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:50:32.958230 | orchestrator | 2026-01-05 00:50:32.958236 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-05 00:50:32.958241 | orchestrator | Monday 05 January 2026 00:50:04 +0000 (0:00:01.474) 0:04:02.485 ******** 2026-01-05 00:50:32.958247 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:50:32.958252 | orchestrator | 2026-01-05 00:50:32.958258 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-05 00:50:32.958263 | orchestrator | Monday 05 January 2026 00:50:06 +0000 (0:00:01.299) 0:04:03.785 ******** 2026-01-05 00:50:32.958268 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.958274 | orchestrator | 2026-01-05 00:50:32.958279 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-05 00:50:32.958285 | orchestrator 
| Monday 05 January 2026 00:50:06 +0000 (0:00:00.142) 0:04:03.928 ******** 2026-01-05 00:50:32.958290 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-05 00:50:32.958301 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-05 00:50:32.958307 | orchestrator | 2026-01-05 00:50:32.958313 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-05 00:50:32.958318 | orchestrator | Monday 05 January 2026 00:50:08 +0000 (0:00:02.147) 0:04:06.075 ******** 2026-01-05 00:50:32.958323 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.958330 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.958339 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.958348 | orchestrator | 2026-01-05 00:50:32.958357 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-05 00:50:32.958366 | orchestrator | Monday 05 January 2026 00:50:09 +0000 (0:00:00.463) 0:04:06.539 ******** 2026-01-05 00:50:32.958374 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.958383 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.958392 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.958400 | orchestrator | 2026-01-05 00:50:32.958408 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-05 00:50:32.958417 | orchestrator | 2026-01-05 00:50:32.958426 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-05 00:50:32.958436 | orchestrator | Monday 05 January 2026 00:50:10 +0000 (0:00:01.122) 0:04:07.662 ******** 2026-01-05 00:50:32.958445 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:32.958455 | orchestrator | 2026-01-05 00:50:32.958463 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-01-05 00:50:32.958472 | orchestrator | Monday 05 January 2026 00:50:10 +0000 (0:00:00.153) 0:04:07.816 ******** 2026-01-05 00:50:32.958481 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:50:32.958491 | orchestrator | 2026-01-05 00:50:32.958499 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-05 00:50:32.958509 | orchestrator | Monday 05 January 2026 00:50:10 +0000 (0:00:00.246) 0:04:08.063 ******** 2026-01-05 00:50:32.958525 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:32.958535 | orchestrator | 2026-01-05 00:50:32.958541 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-05 00:50:32.958546 | orchestrator | 2026-01-05 00:50:32.958552 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-05 00:50:32.958557 | orchestrator | Monday 05 January 2026 00:50:16 +0000 (0:00:06.171) 0:04:14.234 ******** 2026-01-05 00:50:32.958562 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:50:32.958568 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:50:32.958573 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:50:32.958579 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:32.958584 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:32.958589 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:32.958595 | orchestrator | 2026-01-05 00:50:32.958600 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-05 00:50:32.958606 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:00.679) 0:04:14.913 ******** 2026-01-05 00:50:32.958611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-05 00:50:32.958617 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-01-05 00:50:32.958623 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-05 00:50:32.958628 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-05 00:50:32.958633 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-05 00:50:32.958639 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-05 00:50:32.958644 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-05 00:50:32.958649 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-05 00:50:32.958655 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-05 00:50:32.958660 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-05 00:50:32.958666 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-05 00:50:32.958671 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-05 00:50:32.958682 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-05 00:50:32.958688 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-05 00:50:32.958694 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-05 00:50:32.958699 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-05 00:50:32.958705 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-05 00:50:32.958710 | orchestrator | ok: [testbed-node-0 -> 
localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-05 00:50:32.958716 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-05 00:50:32.958721 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-05 00:50:32.958726 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-05 00:50:32.958732 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-05 00:50:32.958737 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-05 00:50:32.958747 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-05 00:50:32.958753 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-05 00:50:32.958762 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-05 00:50:32.958768 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-05 00:50:32.958774 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-05 00:50:32.958779 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-05 00:50:32.958785 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-05 00:50:32.958790 | orchestrator | 2026-01-05 00:50:32.958795 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-05 00:50:32.958801 | orchestrator | Monday 05 January 2026 00:50:29 +0000 (0:00:12.338) 0:04:27.252 ******** 2026-01-05 00:50:32.958806 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.958812 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.958817 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.958823 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.958828 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.958834 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.958839 | orchestrator | 2026-01-05 00:50:32.958845 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-05 00:50:32.958851 | orchestrator | Monday 05 January 2026 00:50:30 +0000 (0:00:00.629) 0:04:27.882 ******** 2026-01-05 00:50:32.958856 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:32.958862 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:32.958867 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:32.958873 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:32.958878 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:32.958884 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:32.958889 | orchestrator | 2026-01-05 00:50:32.958895 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:50:32.958900 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:50:32.958907 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-05 00:50:32.958913 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-05 00:50:32.958919 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-05 00:50:32.958924 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-05 00:50:32.958930 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-05 00:50:32.958935 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-05 00:50:32.958941 | orchestrator | 2026-01-05 00:50:32.958946 | orchestrator | 2026-01-05 00:50:32.958952 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:50:32.958957 | orchestrator | Monday 05 January 2026 00:50:30 +0000 (0:00:00.452) 0:04:28.335 ******** 2026-01-05 00:50:32.958963 | orchestrator | =============================================================================== 2026-01-05 00:50:32.958969 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.63s 2026-01-05 00:50:32.958974 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.02s 2026-01-05 00:50:32.958980 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.41s 2026-01-05 00:50:32.958993 | orchestrator | kubectl : Install required packages ------------------------------------ 14.60s 2026-01-05 00:50:32.958999 | orchestrator | Manage labels ---------------------------------------------------------- 12.34s 2026-01-05 00:50:32.959005 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.21s 2026-01-05 00:50:32.959010 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.19s 2026-01-05 00:50:32.959016 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.17s 2026-01-05 00:50:32.959021 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.86s 2026-01-05 00:50:32.959027 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.97s 2026-01-05 00:50:32.959032 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.08s 2026-01-05 
00:50:32.959037 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.89s 2026-01-05 00:50:32.959043 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.62s 2026-01-05 00:50:32.959048 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.27s 2026-01-05 00:50:32.959054 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.19s 2026-01-05 00:50:32.959063 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.19s 2026-01-05 00:50:32.959109 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.15s 2026-01-05 00:50:32.959119 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.11s 2026-01-05 00:50:32.959124 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.77s 2026-01-05 00:50:32.959129 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.71s 2026-01-05 00:50:32.959135 | orchestrator | 2026-01-05 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:36.005052 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task aa59028d-85f9-4abb-bd4a-bd37a96643e9 is in state STARTED 2026-01-05 00:50:36.009016 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:36.009737 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:36.012188 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 78a20174-eb9b-4159-9b1b-80e91cb1b08f is in state STARTED 2026-01-05 00:50:36.013252 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:36.014363 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 
38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:36.014420 | orchestrator | 2026-01-05 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:39.058949 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task aa59028d-85f9-4abb-bd4a-bd37a96643e9 is in state STARTED 2026-01-05 00:50:39.059044 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:39.061681 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:39.063259 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 78a20174-eb9b-4159-9b1b-80e91cb1b08f is in state STARTED 2026-01-05 00:50:39.065622 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:39.067482 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:39.067519 | orchestrator | 2026-01-05 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:42.113997 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task aa59028d-85f9-4abb-bd4a-bd37a96643e9 is in state SUCCESS 2026-01-05 00:50:42.114253 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:42.115796 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:42.117954 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task 78a20174-eb9b-4159-9b1b-80e91cb1b08f is in state STARTED 2026-01-05 00:50:42.119268 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:42.121185 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:42.121235 | orchestrator | 2026-01-05 00:50:42 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 00:50:45.171426 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:45.172207 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:45.172943 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task 78a20174-eb9b-4159-9b1b-80e91cb1b08f is in state SUCCESS 2026-01-05 00:50:45.174248 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:45.175197 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:45.175228 | orchestrator | 2026-01-05 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:48.237468 | orchestrator | 2026-01-05 00:50:48 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:48.240421 | orchestrator | 2026-01-05 00:50:48 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:48.242887 | orchestrator | 2026-01-05 00:50:48 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:48.245443 | orchestrator | 2026-01-05 00:50:48 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED 2026-01-05 00:50:48.245480 | orchestrator | 2026-01-05 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:51.297803 | orchestrator | 2026-01-05 00:50:51 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:50:51.300371 | orchestrator | 2026-01-05 00:50:51 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:50:51.302661 | orchestrator | 2026-01-05 00:50:51 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:50:51.304905 | orchestrator | 2026-01-05 00:50:51 | INFO  | Task 
38507440-603d-4285-9acb-bcc344de3937 is in state STARTED
2026-01-05 00:50:51.304944 | orchestrator | 2026-01-05 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:54.374180 | orchestrator | 2026-01-05 00:50:54 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:50:54.377281 | orchestrator | 2026-01-05 00:50:54 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED
2026-01-05 00:50:54.380667 | orchestrator | 2026-01-05 00:50:54 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED
2026-01-05 00:50:54.384895 | orchestrator | 2026-01-05 00:50:54 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED
2026-01-05 00:50:54.385016 | orchestrator | 2026-01-05 00:50:54 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds elided: the same four tasks were reported in state STARTED on every check, roughly every 3 seconds, from 00:50:57 through 00:52:16 ...]
2026-01-05 00:52:19.845302 | orchestrator | 2026-01-05 00:52:19 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:52:19.845539 | orchestrator | 2026-01-05 00:52:19 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED
2026-01-05 00:52:19.847510 | orchestrator | 2026-01-05 00:52:19 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED
2026-01-05 00:52:19.848387 | orchestrator | 2026-01-05 00:52:19 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state STARTED
2026-01-05 00:52:19.848423 | orchestrator | 2026-01-05 00:52:19 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:22.916564 | orchestrator |
2026-01-05 00:52:22.916679 | orchestrator |
2026-01-05 00:52:22.916689 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-01-05 00:52:22.916699 | orchestrator |
2026-01-05 00:52:22.916711 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-05 00:52:22.916750 | orchestrator | Monday 05 January 2026 00:50:36 +0000 (0:00:00.260) 0:00:00.260 ******** 2026-01-05
00:52:22.916764 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-05 00:52:22.916777 | orchestrator |
2026-01-05 00:52:22.916790 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-05 00:52:22.916803 | orchestrator | Monday 05 January 2026 00:50:37 +0000 (0:00:00.840) 0:00:01.101 ********
2026-01-05 00:52:22.916816 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:22.916829 | orchestrator |
2026-01-05 00:52:22.916842 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-01-05 00:52:22.916866 | orchestrator | Monday 05 January 2026 00:50:38 +0000 (0:00:01.363) 0:00:02.464 ********
2026-01-05 00:52:22.916874 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:22.916924 | orchestrator |
2026-01-05 00:52:22.916933 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:52:22.916940 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:52:22.916949 | orchestrator |
2026-01-05 00:52:22.916957 | orchestrator |
2026-01-05 00:52:22.916964 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:52:22.916971 | orchestrator | Monday 05 January 2026 00:50:39 +0000 (0:00:00.556) 0:00:03.020 ********
2026-01-05 00:52:22.916979 | orchestrator | ===============================================================================
2026-01-05 00:52:22.916986 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.36s
2026-01-05 00:52:22.916993 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.84s
2026-01-05 00:52:22.917001 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.56s
2026-01-05 00:52:22.917008 | orchestrator |
2026-01-05 00:52:22.917015 | orchestrator |
2026-01-05 00:52:22.917024 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-05 00:52:22.917031 | orchestrator |
2026-01-05 00:52:22.917038 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-05 00:52:22.917045 | orchestrator | Monday 05 January 2026 00:50:36 +0000 (0:00:00.216) 0:00:00.216 ********
2026-01-05 00:52:22.917053 | orchestrator | ok: [testbed-manager]
2026-01-05 00:52:22.917061 | orchestrator |
2026-01-05 00:52:22.917068 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-05 00:52:22.917075 | orchestrator | Monday 05 January 2026 00:50:36 +0000 (0:00:00.592) 0:00:00.809 ********
2026-01-05 00:52:22.917082 | orchestrator | ok: [testbed-manager]
2026-01-05 00:52:22.917090 | orchestrator |
2026-01-05 00:52:22.917098 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-05 00:52:22.917106 | orchestrator | Monday 05 January 2026 00:50:37 +0000 (0:00:00.657) 0:00:01.467 ********
2026-01-05 00:52:22.917115 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-05 00:52:22.917123 | orchestrator |
2026-01-05 00:52:22.917131 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-05 00:52:22.917140 | orchestrator | Monday 05 January 2026 00:50:38 +0000 (0:00:00.797) 0:00:02.264 ********
2026-01-05 00:52:22.917148 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:22.917156 | orchestrator |
2026-01-05 00:52:22.917165 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-05 00:52:22.917173 | orchestrator | Monday 05 January 2026 00:50:40 +0000 (0:00:01.752) 0:00:04.017 ********
2026-01-05 00:52:22.917181 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:22.917190 | orchestrator |
2026-01-05 00:52:22.917198 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-05 00:52:22.917206 | orchestrator | Monday 05 January 2026 00:50:40 +0000 (0:00:00.645) 0:00:04.663 ********
2026-01-05 00:52:22.917214 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-05 00:52:22.917223 | orchestrator |
2026-01-05 00:52:22.917232 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-05 00:52:22.917248 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:01.714) 0:00:06.377 ********
2026-01-05 00:52:22.917255 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-05 00:52:22.917262 | orchestrator |
2026-01-05 00:52:22.917270 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-05 00:52:22.917289 | orchestrator | Monday 05 January 2026 00:50:43 +0000 (0:00:00.905) 0:00:07.282 ********
2026-01-05 00:52:22.917297 | orchestrator | ok: [testbed-manager]
2026-01-05 00:52:22.917304 | orchestrator |
2026-01-05 00:52:22.917311 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-05 00:52:22.917318 | orchestrator | Monday 05 January 2026 00:50:43 +0000 (0:00:00.434) 0:00:07.717 ********
2026-01-05 00:52:22.917325 | orchestrator | ok: [testbed-manager]
2026-01-05 00:52:22.917333 | orchestrator |
2026-01-05 00:52:22.917340 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:52:22.917347 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:52:22.917354 | orchestrator |
2026-01-05 00:52:22.917361 | orchestrator |
2026-01-05 00:52:22.917368 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:52:22.917375 | orchestrator | Monday 05 January 2026 00:50:44 +0000 (0:00:00.376) 0:00:08.093 ********
2026-01-05 00:52:22.917383 | orchestrator | ===============================================================================
2026-01-05 00:52:22.917390 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.75s
2026-01-05 00:52:22.917396 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.71s
2026-01-05 00:52:22.917407 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.91s
2026-01-05 00:52:22.917436 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s
2026-01-05 00:52:22.917445 | orchestrator | Create .kube directory -------------------------------------------------- 0.66s
2026-01-05 00:52:22.917452 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.65s
2026-01-05 00:52:22.917459 | orchestrator | Get home directory of operator user ------------------------------------- 0.59s
2026-01-05 00:52:22.917466 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.43s
2026-01-05 00:52:22.917473 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.38s
2026-01-05 00:52:22.917481 | orchestrator |
2026-01-05 00:52:22.917488 | orchestrator |
2026-01-05 00:52:22.917495 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-01-05 00:52:22.917502 | orchestrator |
2026-01-05 00:52:22.917509 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-01-05 00:52:22.917516 | orchestrator | Monday 05 January 2026 00:49:14 +0000 (0:00:00.168) 0:00:00.168 ********
2026-01-05 00:52:22.917524 | orchestrator | ok: [localhost] => {
2026-01-05 00:52:22.917532 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
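The two kubeconfig plays above fetch the kubeconfig from testbed-node-0 and then rewrite its `server:` URL so the manager reaches the Kubernetes API under its own address. A minimal sketch of that rewrite step, regex-based so it needs no YAML library (the addresses here are illustrative, not taken from the job):

```python
import re

def change_kubeconfig_server(kubeconfig_text, new_server):
    """Replace the cluster 'server:' URL in a kubeconfig document.

    Assumes the usual single-line 'server: https://host:port' layout
    that kubeadm/k3s emit; a YAML parser would be more robust.
    """
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + new_server, kubeconfig_text)

example = """apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""
rewritten = change_kubeconfig_server(example, "https://192.168.16.10:6443")
```

The same effect is often achieved with `kubectl config set-cluster default --server=...` or a `sed`/`replace` task, which is presumably what the "Change server address" tasks do.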
2026-01-05 00:52:22.917540 | orchestrator | }
2026-01-05 00:52:22.917547 | orchestrator |
2026-01-05 00:52:22.917554 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-01-05 00:52:22.917562 | orchestrator | Monday 05 January 2026 00:49:14 +0000 (0:00:00.051) 0:00:00.220 ********
2026-01-05 00:52:22.917570 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-01-05 00:52:22.917579 | orchestrator | ...ignoring
2026-01-05 00:52:22.917587 | orchestrator |
2026-01-05 00:52:22.917594 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-01-05 00:52:22.917602 | orchestrator | Monday 05 January 2026 00:49:17 +0000 (0:00:03.519) 0:00:03.739 ********
2026-01-05 00:52:22.917609 | orchestrator | skipping: [localhost]
2026-01-05 00:52:22.917616 | orchestrator |
2026-01-05 00:52:22.917623 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-01-05 00:52:22.917636 | orchestrator | Monday 05 January 2026 00:49:17 +0000 (0:00:00.046) 0:00:03.786 ********
2026-01-05 00:52:22.917643 | orchestrator | ok: [localhost]
2026-01-05 00:52:22.917650 | orchestrator |
2026-01-05 00:52:22.917658 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:52:22.917665 | orchestrator |
2026-01-05 00:52:22.917672 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:52:22.917679 | orchestrator | Monday 05 January 2026 00:49:17 +0000 (0:00:00.168) 0:00:03.954 ********
2026-01-05 00:52:22.917686 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:22.917693 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:22.917701 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:22.917708 | orchestrator |
2026-01-05 00:52:22.917715 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:52:22.917722 | orchestrator | Monday 05 January 2026 00:49:18 +0000 (0:00:00.299) 0:00:04.254 ********
2026-01-05 00:52:22.917729 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-01-05 00:52:22.917737 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-01-05 00:52:22.917744 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-01-05 00:52:22.917751 | orchestrator |
2026-01-05 00:52:22.917758 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-01-05 00:52:22.917765 | orchestrator |
2026-01-05 00:52:22.917773 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-05 00:52:22.917780 | orchestrator | Monday 05 January 2026 00:49:18 +0000 (0:00:00.580) 0:00:04.834 ********
2026-01-05 00:52:22.917788 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:52:22.917795 | orchestrator |
2026-01-05 00:52:22.917802 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-05 00:52:22.917809 | orchestrator | Monday 05 January 2026 00:49:19 +0000 (0:00:00.809) 0:00:05.644 ********
2026-01-05 00:52:22.917816 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:22.917823 | orchestrator |
2026-01-05 00:52:22.917830 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-01-05 00:52:22.917838 | orchestrator | Monday 05 January 2026 00:49:20 +0000 (0:00:01.188) 0:00:06.833 ********
2026-01-05 00:52:22.917845 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:52:22.917852 | orchestrator |
2026-01-05 00:52:22.917864 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-01-05 00:52:22.917875 | orchestrator | Monday 05 January 2026 00:49:21 +0000 (0:00:00.298) 0:00:07.132 ********
2026-01-05 00:52:22.917906 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:52:22.918007 | orchestrator |
2026-01-05 00:52:22.918164 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-01-05 00:52:22.918178 | orchestrator | Monday 05 January 2026 00:49:21 +0000 (0:00:00.371) 0:00:07.503 ********
2026-01-05 00:52:22.918190 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:52:22.918202 | orchestrator |
2026-01-05 00:52:22.918214 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-01-05 00:52:22.918227 | orchestrator | Monday 05 January 2026 00:49:21 +0000 (0:00:00.293) 0:00:07.797 ********
2026-01-05 00:52:22.918240 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:52:22.918252 | orchestrator |
2026-01-05 00:52:22.918264 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-05 00:52:22.918276 | orchestrator | Monday 05 January 2026 00:49:22 +0000 (0:00:00.730) 0:00:08.527 ********
2026-01-05 00:52:22.918289 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:52:22.918297 | orchestrator |
2026-01-05 00:52:22.918304 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-05 00:52:22.918324 | orchestrator | Monday 05 January 2026 00:49:23 +0000 (0:00:00.841) 0:00:09.369 ********
2026-01-05 00:52:22.918341 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:22.918348 | orchestrator |
2026-01-05 00:52:22.918355 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-01-05 00:52:22.918362 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.835) 0:00:10.204 ******** 2026-01-05
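The expected "Check RabbitMQ service" failure above comes from an Ansible `wait_for` probe that looks for the string "RabbitMQ Management" on 192.168.16.9:15672 and times out because the service is not yet deployed. A rough Python approximation of that probe (HTTP-based here, whereas `wait_for` reads the raw socket; host/port in the demo are deliberately unreachable):

```python
import time
import urllib.request

def wait_for_http_string(host, port, needle, timeout=3.0, interval=0.5):
    """Poll http://host:port/ until the response body contains `needle`.

    Returns True on a match, False once `timeout` seconds elapse --
    mirroring the wait_for task that is allowed to fail in the log.
    """
    deadline = time.monotonic() + timeout
    url = f"http://{host}:{port}/"
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if needle.encode() in resp.read():
                    return True
        except OSError:
            # Connection refused / reset / timed out: service not up yet.
            pass
        time.sleep(interval)
    return False

# Port 1 is effectively never serving HTTP, so this fails fast, like
# the pre-deployment check in the job.
reachable = wait_for_http_string("127.0.0.1", 1, "RabbitMQ Management",
                                 timeout=0.3, interval=0.1)
```

The play then uses that result to choose `kolla_action_rabbitmq`: `upgrade` if the management UI already answers, otherwise the fresh-deploy action.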
00:52:22.918369 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:52:22.918376 | orchestrator |
2026-01-05 00:52:22.918384 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-01-05 00:52:22.918391 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.252) 0:00:10.457 ********
2026-01-05 00:52:22.918398 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:52:22.918405 | orchestrator |
2026-01-05 00:52:22.918412 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-01-05 00:52:22.918419 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.251) 0:00:10.708 ********
2026-01-05 00:52:22.918431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:52:22.918448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', ...} [same service definition as testbed-node-0 above])
2026-01-05 00:52:22.918469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', ...} [same service definition as testbed-node-0 above])
2026-01-05 00:52:22.918491 | orchestrator |
2026-01-05 00:52:22.918504 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-01-05 00:52:22.918517 | orchestrator | Monday 05 January 2026 00:49:25 +0000 (0:00:00.942) 0:00:11.651 ********
2026-01-05 00:52:22.918532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', ...} [same service definition as above])
2026-01-05 00:52:22.918541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', ...} [same service definition as above])
2026-01-05 00:52:22.918549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', ...} [same service definition as above])
2026-01-05 00:52:22.918557 | orchestrator |
2026-01-05 00:52:22.918568 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-01-05 00:52:22.918576 | orchestrator | Monday 05 January 2026 00:49:28 +0000 (0:00:03.122) 0:00:14.773 ********
2026-01-05 00:52:22.918583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-05 00:52:22.918590 |
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-05 00:52:22.918605 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-05 00:52:22.918617 | orchestrator |
2026-01-05 00:52:22.918629 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-01-05 00:52:22.918641 | orchestrator | Monday 05 January 2026 00:49:30 +0000 (0:00:01.540) 0:00:16.314 ********
2026-01-05 00:52:22.918653 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-05 00:52:22.918665 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-05 00:52:22.918676 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-05 00:52:22.918688 | orchestrator |
2026-01-05 00:52:22.918703 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-01-05 00:52:22.918729 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:02.539) 0:00:18.854 ********
2026-01-05 00:52:22.918741 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-05 00:52:22.918753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-05 00:52:22.918765 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-05 00:52:22.918777 | orchestrator |
2026-01-05 00:52:22.918787 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-01-05 00:52:22.918797 | orchestrator | Monday 05 January 2026 00:49:34 +0000 (0:00:01.471) 0:00:20.325 ********
2026-01-05 00:52:22.918809 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-05 00:52:22.918821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-05 00:52:22.918833 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-05 00:52:22.918845 | orchestrator |
2026-01-05 00:52:22.918857 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-01-05 00:52:22.918869 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:01.980) 0:00:22.305 ********
2026-01-05 00:52:22.918907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-05 00:52:22.918919 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-05 00:52:22.918931 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-05 00:52:22.918944 | orchestrator |
2026-01-05 00:52:22.918956 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-05 00:52:22.918967 | orchestrator | Monday 05 January 2026 00:49:37 +0000 (0:00:01.651) 0:00:23.956 ********
2026-01-05 00:52:22.918986 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-05 00:52:22.918999 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-05 00:52:22.919011 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-05 00:52:22.919022 | orchestrator |
2026-01-05 00:52:22.919033 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-05 00:52:22.919046 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:01.435) 0:00:25.392 ********
2026-01-05
00:52:22.919056 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:52:22.919068 | orchestrator | 2026-01-05 00:52:22.919078 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-01-05 00:52:22.919089 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:00.652) 0:00:26.045 ******** 2026-01-05 00:52:22.919111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:52:22.919145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:52:22.919160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:52:22.919172 | orchestrator | 2026-01-05 00:52:22.919184 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-01-05 00:52:22.919195 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:01.319) 0:00:27.364 ******** 2026-01-05 
00:52:22.919207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:52:22.919228 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:52:22.919241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:52:22.919254 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:52:22.919325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:52:22.919345 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:52:22.919359 | orchestrator | 2026-01-05 00:52:22.919372 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-01-05 00:52:22.919385 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:00.413) 0:00:27.777 ******** 2026-01-05 00:52:22.919398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:52:22.919492 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:52:22.919505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-01-05 00:52:22.919513 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:52:22.919526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:52:22.919534 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:52:22.919541 | orchestrator | 2026-01-05 00:52:22.919549 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-01-05 00:52:22.919563 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:00.804) 0:00:28.582 ******** 2026-01-05 00:52:22.919572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:52:22.919580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:52:22.919598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:52:22.919607 | orchestrator | 2026-01-05 00:52:22.919614 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-01-05 00:52:22.919622 | orchestrator | Monday 05 January 2026 00:49:43 +0000 (0:00:01.205) 0:00:29.787 ******** 2026-01-05 00:52:22.919629 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:52:22.919637 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:52:22.919644 | orchestrator | } 2026-01-05 00:52:22.919657 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:52:22.919669 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:52:22.919680 | orchestrator | } 2026-01-05 00:52:22.919691 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:52:22.919701 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:52:22.919712 | orchestrator | } 2026-01-05 00:52:22.919722 | orchestrator | 2026-01-05 00:52:22.919733 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:52:22.919743 | orchestrator | Monday 05 January 2026 00:49:44 +0000 (0:00:00.345) 0:00:30.133 ******** 2026-01-05 00:52:22.919763 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:52:22.919775 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:52:22.919786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:52:22.919821 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:52:22.919845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:52:22.919859 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:52:22.919872 | orchestrator | 2026-01-05 00:52:22.919998 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-05 00:52:22.920017 | orchestrator | Monday 05 January 2026 00:49:44 +0000 (0:00:00.907) 0:00:31.040 ******** 2026-01-05 00:52:22.920030 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:52:22.920043 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:52:22.920055 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:52:22.920066 | orchestrator | 
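For reference, the item each of the loops above iterates over is a plain service-definition mapping (container name, image, environment, volumes, healthcheck, haproxy frontend). The sketch below rebuilds that structure from the log and shows one plausible way its `healthcheck` block maps onto Docker CLI health flags; `healthcheck_args` is a hypothetical helper for illustration, not part of kolla-ansible, and the cluster cookie is a placeholder rather than the value from this run.

```python
# Sketch of the rabbitmq service definition seen in the log above.
# The RABBITMQ_CLUSTER_COOKIE value is a placeholder, not the real secret.
rabbitmq_service = {
    "container_name": "rabbitmq",
    "group": "rabbitmq",
    "enabled": True,
    "image": "registry.osism.tech/kolla/rabbitmq:2025.1",
    "environment": {
        "KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS",
        "RABBITMQ_CLUSTER_COOKIE": "<redacted>",  # placeholder
        "RABBITMQ_LOG_DIR": "/var/log/kolla/rabbitmq",
    },
    "volumes": [
        "/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro",
        "rabbitmq:/var/lib/rabbitmq/",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_rabbitmq"],
        "timeout": "30",
    },
}

def healthcheck_args(hc):
    """Hypothetical translation of the healthcheck dict into `docker run`
    health flags (intervals are in seconds in the log's convention)."""
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        "--health-cmd", hc["test"][1],
    ]

print(healthcheck_args(rabbitmq_service["healthcheck"]))
```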
2026-01-05 00:52:22.920078 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-01-05 00:52:22.920089 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:01.070) 0:00:32.110 ********
2026-01-05 00:52:22.920100 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:22.920112 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:22.920123 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:22.920134 | orchestrator |
2026-01-05 00:52:22.920144 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-01-05 00:52:22.920156 | orchestrator | Monday 05 January 2026 00:49:55 +0000 (0:00:09.773) 0:00:41.884 ********
2026-01-05 00:52:22.920168 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:22.920181 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:22.920193 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:22.920205 | orchestrator |
2026-01-05 00:52:22.920218 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-05 00:52:22.920231 | orchestrator |
2026-01-05 00:52:22.920244 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-05 00:52:22.920267 | orchestrator | Monday 05 January 2026 00:49:56 +0000 (0:00:00.639) 0:00:42.523 ********
2026-01-05 00:52:22.920278 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:22.920291 | orchestrator |
2026-01-05 00:52:22.920303 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-05 00:52:22.920328 | orchestrator | Monday 05 January 2026 00:49:57 +0000 (0:00:00.770) 0:00:43.294 ********
2026-01-05 00:52:22.920340 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:52:22.920352 | orchestrator |
2026-01-05 00:52:22.920365 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-05 00:52:22.920376 | orchestrator | Monday 05 January 2026 00:49:57 +0000 (0:00:00.109) 0:00:43.403 ********
2026-01-05 00:52:22.920387 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:22.920398 | orchestrator |
2026-01-05 00:52:22.920409 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-05 00:52:22.920420 | orchestrator | Monday 05 January 2026 00:50:04 +0000 (0:00:06.832) 0:00:50.236 ********
2026-01-05 00:52:22.920432 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:22.920442 | orchestrator |
2026-01-05 00:52:22.920453 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-05 00:52:22.920465 | orchestrator |
2026-01-05 00:52:22.920477 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-05 00:52:22.920489 | orchestrator | Monday 05 January 2026 00:51:53 +0000 (0:01:49.193) 0:02:39.429 ********
2026-01-05 00:52:22.920501 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:22.920514 | orchestrator |
2026-01-05 00:52:22.920525 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-05 00:52:22.920537 | orchestrator | Monday 05 January 2026 00:51:54 +0000 (0:00:00.805) 0:02:40.235 ********
2026-01-05 00:52:22.920548 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:52:22.920560 | orchestrator |
2026-01-05 00:52:22.920572 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-05 00:52:22.920584 | orchestrator | Monday 05 January 2026 00:51:54 +0000 (0:00:00.126) 0:02:40.361 ********
2026-01-05 00:52:22.920596 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:22.920609 | orchestrator |
2026-01-05 00:52:22.920621 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-05 00:52:22.920633 | orchestrator | Monday 05 January 2026 00:52:00 +0000 (0:00:06.671) 0:02:47.033 ********
2026-01-05 00:52:22.920646 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:22.920659 | orchestrator |
2026-01-05 00:52:22.920672 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-05 00:52:22.920685 | orchestrator |
2026-01-05 00:52:22.920698 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-05 00:52:22.920710 | orchestrator | Monday 05 January 2026 00:52:07 +0000 (0:00:06.606) 0:02:53.639 ********
2026-01-05 00:52:22.920722 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:22.920734 | orchestrator |
2026-01-05 00:52:22.920746 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-05 00:52:22.920759 | orchestrator | Monday 05 January 2026 00:52:08 +0000 (0:00:00.666) 0:02:54.305 ********
2026-01-05 00:52:22.920772 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:52:22.920785 | orchestrator |
2026-01-05 00:52:22.920798 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-05 00:52:22.920812 | orchestrator | Monday 05 January 2026 00:52:08 +0000 (0:00:00.234) 0:02:54.539 ********
2026-01-05 00:52:22.920826 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:22.920838 | orchestrator |
2026-01-05 00:52:22.920852 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-05 00:52:22.920865 | orchestrator | Monday 05 January 2026 00:52:10 +0000 (0:00:01.635) 0:02:56.175 ********
2026-01-05 00:52:22.920877 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:22.920917 | orchestrator |
2026-01-05 00:52:22.920929 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-01-05 00:52:22.920942 | orchestrator |
2026-01-05 00:52:22.920954 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-01-05 00:52:22.920966 | orchestrator | Monday 05 January 2026 00:52:18 +0000 (0:00:08.406) 0:03:04.581 ********
2026-01-05 00:52:22.920977 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:52:22.921001 | orchestrator |
2026-01-05 00:52:22.921013 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-01-05 00:52:22.921025 | orchestrator | Monday 05 January 2026 00:52:19 +0000 (0:00:00.596) 0:03:05.178 ********
2026-01-05 00:52:22.921037 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:22.921049 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:22.921061 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:22.921071 | orchestrator |
2026-01-05 00:52:22.921082 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:52:22.921105 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-01-05 00:52:22.921121 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:52:22.921133 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:22.921146 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:22.921158 | orchestrator |
2026-01-05 00:52:22.921171 | orchestrator |
2026-01-05 00:52:22.921183 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:52:22.921195 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:03.172) 0:03:08.350 ********
2026-01-05 00:52:22.921207 | orchestrator | ===============================================================================
2026-01-05 00:52:22.921219 | orchestrator | rabbitmq : Waiting for
rabbitmq to start ------------------------------ 124.21s 2026-01-05 00:52:22.921244 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.14s 2026-01-05 00:52:22.921257 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.77s 2026-01-05 00:52:22.921270 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.52s 2026-01-05 00:52:22.921282 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.17s 2026-01-05 00:52:22.921294 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.12s 2026-01-05 00:52:22.921307 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.54s 2026-01-05 00:52:22.921319 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.24s 2026-01-05 00:52:22.921331 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.98s 2026-01-05 00:52:22.921343 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.65s 2026-01-05 00:52:22.921356 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.54s 2026-01-05 00:52:22.921368 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.47s 2026-01-05 00:52:22.921380 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.44s 2026-01-05 00:52:22.921391 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.32s 2026-01-05 00:52:22.921404 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.21s 2026-01-05 00:52:22.921416 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.19s 2026-01-05 00:52:22.921428 | orchestrator | rabbitmq : Creating rabbitmq volume 
------------------------------------- 1.07s 2026-01-05 00:52:22.921440 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.94s 2026-01-05 00:52:22.921451 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.91s 2026-01-05 00:52:22.921462 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.84s 2026-01-05 00:52:22.921475 | orchestrator | 2026-01-05 00:52:22 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:52:22.921499 | orchestrator | 2026-01-05 00:52:22 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:52:22.921512 | orchestrator | 2026-01-05 00:52:22 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:52:22.921524 | orchestrator | 2026-01-05 00:52:22 | INFO  | Task 38507440-603d-4285-9acb-bcc344de3937 is in state SUCCESS 2026-01-05 00:52:22.921536 | orchestrator | 2026-01-05 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:52:25.923747 | orchestrator | 2026-01-05 00:52:25 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:52:25.923847 | orchestrator | 2026-01-05 00:52:25 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:52:25.924614 | orchestrator | 2026-01-05 00:52:25 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 2026-01-05 00:52:25.924687 | orchestrator | 2026-01-05 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:52:28.954627 | orchestrator | 2026-01-05 00:52:28 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:52:28.954812 | orchestrator | 2026-01-05 00:52:28 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:52:28.956030 | orchestrator | 2026-01-05 00:52:28 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED 
2026-01-05 00:53:51 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED
2026-01-05 00:53:51.168021 | orchestrator | 2026-01-05 00:53:51 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:53:54.208642 | orchestrator | 2026-01-05 00:53:54 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:53:54.211370 | orchestrator | 2026-01-05 00:53:54 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED
2026-01-05 00:53:54.213850 | orchestrator | 2026-01-05 00:53:54 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED
2026-01-05 00:53:54.213935 | orchestrator | 2026-01-05 00:53:54 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:53:57.246808 | orchestrator | 2026-01-05 00:53:57 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:53:57.247024 | orchestrator | 2026-01-05 00:53:57 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED
2026-01-05 00:53:57.247137 | orchestrator | 2026-01-05 00:53:57 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state STARTED
2026-01-05 00:53:57.247153 | orchestrator | 2026-01-05 00:53:57 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:54:00.298522 | orchestrator | 2026-01-05 00:54:00 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED
2026-01-05 00:54:00.300002 | orchestrator | 2026-01-05 00:54:00 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED
2026-01-05 00:54:00.303093 | orchestrator | 2026-01-05 00:54:00 | INFO  | Task 75202ac4-a87a-4e97-94e7-4c58640de682 is in state SUCCESS
2026-01-05 00:54:00.303458 | orchestrator |
2026-01-05 00:54:00.305365 | orchestrator |
2026-01-05 00:54:00.305426 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:54:00.305526 | orchestrator |
2026-01-05 00:54:00.305543 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:54:00.305560 | orchestrator | Monday 05 January 2026 00:49:59 +0000 (0:00:00.266) 0:00:00.266 ********
2026-01-05 00:54:00.305577 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:54:00.305594 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:54:00.305611 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:54:00.305627 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:54:00.305644 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:54:00.305705 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:54:00.305723 | orchestrator |
2026-01-05 00:54:00.305738 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:54:00.305754 | orchestrator | Monday 05 January 2026 00:50:00 +0000 (0:00:01.034) 0:00:01.300 ********
2026-01-05 00:54:00.305770 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-01-05 00:54:00.305786 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-01-05 00:54:00.305801 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-01-05 00:54:00.305818 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-01-05 00:54:00.305829 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-01-05 00:54:00.305940 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-01-05 00:54:00.305951 | orchestrator |
2026-01-05 00:54:00.305964 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-01-05 00:54:00.305975 | orchestrator |
2026-01-05 00:54:00.306131 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-01-05 00:54:00.306180 | orchestrator | Monday 05 January 2026 00:50:02 +0000 (0:00:01.411) 0:00:02.712 ********
2026-01-05 00:54:00.306193 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2,
testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:54:00.306206 | orchestrator |
2026-01-05 00:54:00.306217 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-01-05 00:54:00.306228 | orchestrator | Monday 05 January 2026 00:50:03 +0000 (0:00:01.078) 0:00:03.791 ********
2026-01-05 00:54:00.306242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306342 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306351 | orchestrator |
2026-01-05 00:54:00.306376 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-01-05 00:54:00.306386 | orchestrator | Monday 05 January 2026 00:50:05 +0000 (0:00:01.850) 0:00:05.642 ********
2026-01-05 00:54:00.306419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306474 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306483 | orchestrator |
2026-01-05 00:54:00.306492 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-01-05 00:54:00.306500 | orchestrator | Monday 05 January 2026 00:50:08 +0000 (0:00:02.852) 0:00:08.494 ********
2026-01-05 00:54:00.306515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306574 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306644 | orchestrator |
2026-01-05 00:54:00.306655 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-01-05 00:54:00.306664 | orchestrator | Monday 05 January 2026 00:50:09 +0000 (0:00:01.835) 0:00:10.330 ********
2026-01-05 00:54:00.306701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:54:00.306753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group':
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.306805 | orchestrator | 2026-01-05 00:54:00.306830 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-01-05 00:54:00.306845 | orchestrator | Monday 05 January 2026 00:50:11 +0000 (0:00:01.520) 0:00:11.850 ******** 2026-01-05 00:54:00.306860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.306876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.306891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.306906 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.306921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.306937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.306953 | orchestrator | 2026-01-05 00:54:00.306968 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-01-05 00:54:00.306984 | orchestrator | Monday 05 January 2026 00:50:13 +0000 (0:00:01.631) 0:00:13.481 ******** 2026-01-05 00:54:00.306994 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:54:00.307010 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.307019 | orchestrator | } 2026-01-05 00:54:00.307029 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:54:00.307037 | orchestrator |  "msg": "Notifying handlers" 
2026-01-05 00:54:00.307046 | orchestrator | } 2026-01-05 00:54:00.307054 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:54:00.307063 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.307072 | orchestrator | } 2026-01-05 00:54:00.307080 | orchestrator | changed: [testbed-node-3] => { 2026-01-05 00:54:00.307089 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.307097 | orchestrator | } 2026-01-05 00:54:00.307106 | orchestrator | changed: [testbed-node-4] => { 2026-01-05 00:54:00.307131 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.307140 | orchestrator | } 2026-01-05 00:54:00.307148 | orchestrator | changed: [testbed-node-5] => { 2026-01-05 00:54:00.307163 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.307178 | orchestrator | } 2026-01-05 00:54:00.307195 | orchestrator | 2026-01-05 00:54:00.307217 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:54:00.307231 | orchestrator | Monday 05 January 2026 00:50:13 +0000 (0:00:00.891) 0:00:14.373 ******** 2026-01-05 00:54:00.307245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.307272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.307287 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.307299 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.307312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.307327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.307342 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.307356 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:00.307370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.307386 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:00.307401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.307415 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:00.307430 | orchestrator | 2026-01-05 00:54:00.307444 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-05 00:54:00.307459 | orchestrator | Monday 05 January 2026 00:50:15 +0000 (0:00:01.496) 0:00:15.870 ******** 2026-01-05 00:54:00.307474 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.307489 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.307517 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.307532 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:00.307546 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:00.307560 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:00.307575 | orchestrator | 2026-01-05 00:54:00.307590 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-05 00:54:00.307605 | orchestrator | Monday 05 January 2026 00:50:18 +0000 (0:00:02.796) 0:00:18.666 ******** 2026-01-05 00:54:00.307628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-05 00:54:00.307639 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-05 00:54:00.307648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:54:00.307657 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-05 00:54:00.307741 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-05 00:54:00.307763 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-05 00:54:00.307776 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-05 00:54:00.307791 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:54:00.307807 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-05 00:54:00.307825 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:54:00.307841 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:54:00.307856 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:54:00.307875 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:54:00.307888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-05 00:54:00.307902 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-05 00:54:00.307916 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:54:00.307933 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-05 00:54:00.307947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-05 00:54:00.307961 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-05 00:54:00.307975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:54:00.307989 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:54:00.308004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:54:00.308019 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:54:00.308033 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:54:00.308049 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:54:00.308058 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:54:00.308079 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:54:00.308088 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:54:00.308096 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:54:00.308105 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:54:00.308114 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:54:00.308123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:54:00.308132 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 
'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:54:00.308141 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:54:00.308150 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:54:00.308159 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:54:00.308168 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:54:00.308176 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:54:00.308192 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-05 00:54:00.308202 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:54:00.308211 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:54:00.308220 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:54:00.308229 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:54:00.308239 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-05 00:54:00.308248 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:54:00.308257 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 
'state': 'present'}) 2026-01-05 00:54:00.308266 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-05 00:54:00.308282 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-05 00:54:00.308292 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-05 00:54:00.308301 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:54:00.308309 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:54:00.308318 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:54:00.308327 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:54:00.308336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:54:00.308351 | orchestrator | 2026-01-05 00:54:00.308360 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:54:00.308369 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:23.545) 0:00:42.212 ******** 2026-01-05 00:54:00.308378 | orchestrator | 2026-01-05 00:54:00.308387 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:54:00.308396 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:00.070) 0:00:42.282 ******** 2026-01-05 00:54:00.308404 | orchestrator | 2026-01-05 00:54:00.308413 | orchestrator | TASK [ovn-controller 
: Flush handlers] ***************************************** 2026-01-05 00:54:00.308422 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:00.062) 0:00:42.345 ******** 2026-01-05 00:54:00.308430 | orchestrator | 2026-01-05 00:54:00.308439 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:54:00.308448 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:00.066) 0:00:42.411 ******** 2026-01-05 00:54:00.308457 | orchestrator | 2026-01-05 00:54:00.308466 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:54:00.308474 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:00.103) 0:00:42.514 ******** 2026-01-05 00:54:00.308484 | orchestrator | 2026-01-05 00:54:00.308493 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:54:00.308502 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:00.082) 0:00:42.596 ******** 2026-01-05 00:54:00.308510 | orchestrator | 2026-01-05 00:54:00.308519 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-05 00:54:00.308528 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:00.067) 0:00:42.664 ******** 2026-01-05 00:54:00.308536 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.308546 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:54:00.308555 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.308564 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.308573 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:54:00.308582 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:54:00.308591 | orchestrator | 2026-01-05 00:54:00.308600 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-05 00:54:00.308609 | orchestrator | Monday 05 January 2026 00:50:44 +0000 (0:00:02.098) 0:00:44.762 ******** 
2026-01-05 00:54:00.308618 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.308627 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:00.308637 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:00.308646 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.308654 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.308663 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:00.308829 | orchestrator | 2026-01-05 00:54:00.308859 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-05 00:54:00.308869 | orchestrator | 2026-01-05 00:54:00.308878 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 00:54:00.308886 | orchestrator | Monday 05 January 2026 00:50:52 +0000 (0:00:08.617) 0:00:53.379 ******** 2026-01-05 00:54:00.308903 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:00.308913 | orchestrator | 2026-01-05 00:54:00.308922 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 00:54:00.308931 | orchestrator | Monday 05 January 2026 00:50:53 +0000 (0:00:00.603) 0:00:53.983 ******** 2026-01-05 00:54:00.308940 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:00.308949 | orchestrator | 2026-01-05 00:54:00.308958 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-05 00:54:00.308968 | orchestrator | Monday 05 January 2026 00:50:54 +0000 (0:00:00.951) 0:00:54.935 ******** 2026-01-05 00:54:00.308977 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.308996 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.309005 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.309013 | orchestrator | 2026-01-05 
00:54:00.309022 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-05 00:54:00.309031 | orchestrator | Monday 05 January 2026 00:50:55 +0000 (0:00:01.029) 0:00:55.964 ******** 2026-01-05 00:54:00.309040 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.309049 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.309058 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.309067 | orchestrator | 2026-01-05 00:54:00.309076 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-05 00:54:00.309085 | orchestrator | Monday 05 January 2026 00:50:55 +0000 (0:00:00.430) 0:00:56.395 ******** 2026-01-05 00:54:00.309094 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.309102 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.309111 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.309120 | orchestrator | 2026-01-05 00:54:00.309129 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-05 00:54:00.309151 | orchestrator | Monday 05 January 2026 00:50:56 +0000 (0:00:00.645) 0:00:57.041 ******** 2026-01-05 00:54:00.309160 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.309169 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.309178 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.309187 | orchestrator | 2026-01-05 00:54:00.309196 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-05 00:54:00.309206 | orchestrator | Monday 05 January 2026 00:50:57 +0000 (0:00:00.435) 0:00:57.477 ******** 2026-01-05 00:54:00.309215 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.309224 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.309233 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.309242 | orchestrator | 2026-01-05 00:54:00.309251 | orchestrator | TASK [ovn-db : Check if running on all OVN 
NB DB hosts] ************************ 2026-01-05 00:54:00.309259 | orchestrator | Monday 05 January 2026 00:50:57 +0000 (0:00:00.388) 0:00:57.865 ******** 2026-01-05 00:54:00.309268 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309277 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309286 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309295 | orchestrator | 2026-01-05 00:54:00.309305 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-05 00:54:00.309314 | orchestrator | Monday 05 January 2026 00:50:57 +0000 (0:00:00.378) 0:00:58.244 ******** 2026-01-05 00:54:00.309322 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309331 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309340 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309349 | orchestrator | 2026-01-05 00:54:00.309358 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-05 00:54:00.309367 | orchestrator | Monday 05 January 2026 00:50:58 +0000 (0:00:00.538) 0:00:58.783 ******** 2026-01-05 00:54:00.309377 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309387 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309395 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309402 | orchestrator | 2026-01-05 00:54:00.309410 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-05 00:54:00.309418 | orchestrator | Monday 05 January 2026 00:50:58 +0000 (0:00:00.325) 0:00:59.108 ******** 2026-01-05 00:54:00.309426 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309434 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309442 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309450 | orchestrator | 2026-01-05 00:54:00.309458 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB 
leader/follower role] ************** 2026-01-05 00:54:00.309466 | orchestrator | Monday 05 January 2026 00:50:59 +0000 (0:00:00.356) 0:00:59.465 ******** 2026-01-05 00:54:00.309474 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309482 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309490 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309547 | orchestrator | 2026-01-05 00:54:00.309560 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-05 00:54:00.309574 | orchestrator | Monday 05 January 2026 00:50:59 +0000 (0:00:00.344) 0:00:59.810 ******** 2026-01-05 00:54:00.309587 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309601 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309615 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309630 | orchestrator | 2026-01-05 00:54:00.309645 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-05 00:54:00.309658 | orchestrator | Monday 05 January 2026 00:50:59 +0000 (0:00:00.588) 0:01:00.398 ******** 2026-01-05 00:54:00.309754 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309765 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309773 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309781 | orchestrator | 2026-01-05 00:54:00.309790 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-05 00:54:00.309798 | orchestrator | Monday 05 January 2026 00:51:00 +0000 (0:00:00.392) 0:01:00.790 ******** 2026-01-05 00:54:00.309807 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309815 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309823 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309831 | orchestrator | 2026-01-05 00:54:00.309839 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB 
service port liveness] ************* 2026-01-05 00:54:00.309847 | orchestrator | Monday 05 January 2026 00:51:00 +0000 (0:00:00.471) 0:01:01.262 ******** 2026-01-05 00:54:00.309855 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309863 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309872 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309880 | orchestrator | 2026-01-05 00:54:00.309888 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-05 00:54:00.309897 | orchestrator | Monday 05 January 2026 00:51:01 +0000 (0:00:00.635) 0:01:01.897 ******** 2026-01-05 00:54:00.309905 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309913 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309921 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309930 | orchestrator | 2026-01-05 00:54:00.309938 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-05 00:54:00.309947 | orchestrator | Monday 05 January 2026 00:51:01 +0000 (0:00:00.433) 0:01:02.331 ******** 2026-01-05 00:54:00.309955 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.309963 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.309971 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.309979 | orchestrator | 2026-01-05 00:54:00.309987 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-05 00:54:00.309997 | orchestrator | Monday 05 January 2026 00:51:02 +0000 (0:00:00.851) 0:01:03.182 ******** 2026-01-05 00:54:00.310012 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.310082 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.310095 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.310108 | orchestrator | 2026-01-05 00:54:00.310121 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-01-05 00:54:00.310135 | orchestrator | Monday 05 January 2026 00:51:03 +0000 (0:00:00.406) 0:01:03.589 ******** 2026-01-05 00:54:00.310149 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:00.310163 | orchestrator | 2026-01-05 00:54:00.310183 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-05 00:54:00.310193 | orchestrator | Monday 05 January 2026 00:51:04 +0000 (0:00:01.060) 0:01:04.650 ******** 2026-01-05 00:54:00.310200 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.310209 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.310216 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.310224 | orchestrator | 2026-01-05 00:54:00.310232 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-05 00:54:00.310250 | orchestrator | Monday 05 January 2026 00:51:05 +0000 (0:00:01.556) 0:01:06.206 ******** 2026-01-05 00:54:00.310258 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.310266 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.310274 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.310282 | orchestrator | 2026-01-05 00:54:00.310291 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-05 00:54:00.310333 | orchestrator | Monday 05 January 2026 00:51:06 +0000 (0:00:00.776) 0:01:06.982 ******** 2026-01-05 00:54:00.310343 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.310351 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.310359 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.310367 | orchestrator | 2026-01-05 00:54:00.310375 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-05 00:54:00.310384 | orchestrator | 
Monday 05 January 2026 00:51:07 +0000 (0:00:00.500) 0:01:07.483 ******** 2026-01-05 00:54:00.310392 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.310400 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.310408 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.310416 | orchestrator | 2026-01-05 00:54:00.310424 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-05 00:54:00.310432 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:00.575) 0:01:08.058 ******** 2026-01-05 00:54:00.310441 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.310449 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.310456 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.310464 | orchestrator | 2026-01-05 00:54:00.310472 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-05 00:54:00.310480 | orchestrator | Monday 05 January 2026 00:51:08 +0000 (0:00:00.770) 0:01:08.829 ******** 2026-01-05 00:54:00.310488 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.310496 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.310504 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.310512 | orchestrator | 2026-01-05 00:54:00.310520 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-05 00:54:00.310528 | orchestrator | Monday 05 January 2026 00:51:08 +0000 (0:00:00.344) 0:01:09.173 ******** 2026-01-05 00:54:00.310536 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.310543 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.310551 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.310559 | orchestrator | 2026-01-05 00:54:00.310567 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-05 00:54:00.310574 | 
orchestrator | Monday 05 January 2026 00:51:09 +0000 (0:00:00.413) 0:01:09.586 ******** 2026-01-05 00:54:00.310582 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.310590 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.310599 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.310607 | orchestrator | 2026-01-05 00:54:00.310615 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 00:54:00.310623 | orchestrator | Monday 05 January 2026 00:51:09 +0000 (0:00:00.408) 0:01:09.995 ******** 2026-01-05 00:54:00.310633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-05 00:54:00.310761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.310780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.310813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.310837 | orchestrator | 2026-01-05 00:54:00.310846 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 00:54:00.310854 | orchestrator | Monday 05 January 2026 00:51:12 +0000 (0:00:03.336) 0:01:13.332 ******** 2026-01-05 00:54:00.310863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-05 00:54:00.310872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.310953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.310970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.310978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.310992 | orchestrator | 2026-01-05 00:54:00.311001 | orchestrator | 
TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-05 00:54:00.311009 | orchestrator | Monday 05 January 2026 00:51:19 +0000 (0:00:06.593) 0:01:19.925 ******** 2026-01-05 00:54:00.311018 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-05 00:54:00.311027 | orchestrator | 2026-01-05 00:54:00.311035 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-05 00:54:00.311043 | orchestrator | Monday 05 January 2026 00:51:20 +0000 (0:00:00.607) 0:01:20.533 ******** 2026-01-05 00:54:00.311051 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.311060 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.311068 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.311076 | orchestrator | 2026-01-05 00:54:00.311089 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-05 00:54:00.311098 | orchestrator | Monday 05 January 2026 00:51:21 +0000 (0:00:00.972) 0:01:21.506 ******** 2026-01-05 00:54:00.311106 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.311114 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.311122 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.311130 | orchestrator | 2026-01-05 00:54:00.311138 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-05 00:54:00.311146 | orchestrator | Monday 05 January 2026 00:51:22 +0000 (0:00:01.575) 0:01:23.081 ******** 2026-01-05 00:54:00.311154 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.311162 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.311170 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.311177 | orchestrator | 2026-01-05 00:54:00.311185 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] 
******************** 2026-01-05 00:54:00.311193 | orchestrator | Monday 05 January 2026 00:51:24 +0000 (0:00:01.558) 0:01:24.640 ******** 2026-01-05 00:54:00.311208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311338 | orchestrator | 2026-01-05 00:54:00.311346 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-05 00:54:00.311354 | orchestrator | Monday 05 January 2026 00:51:28 +0000 (0:00:03.970) 0:01:28.611 ******** 2026-01-05 00:54:00.311363 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:54:00.311370 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.311379 | orchestrator | } 2026-01-05 00:54:00.311387 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:54:00.311395 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.311402 | orchestrator | } 2026-01-05 00:54:00.311410 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:54:00.311418 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.311426 | orchestrator | } 2026-01-05 00:54:00.311434 | orchestrator | 2026-01-05 00:54:00.311442 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:54:00.311450 | orchestrator | Monday 05 January 2026 00:51:28 +0000 (0:00:00.365) 0:01:28.976 ******** 2026-01-05 00:54:00.311458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.311551 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.311559 | orchestrator | 2026-01-05 00:54:00.311567 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-05 00:54:00.311575 | orchestrator | Monday 05 January 2026 00:51:31 +0000 (0:00:02.735) 0:01:31.712 ******** 2026-01-05 00:54:00.311584 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-05 00:54:00.311592 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-05 00:54:00.311600 | orchestrator | 
changed: [testbed-node-2] => (item=[1]) 2026-01-05 00:54:00.311608 | orchestrator | 2026-01-05 00:54:00.311616 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-05 00:54:00.311624 | orchestrator | Monday 05 January 2026 00:51:32 +0000 (0:00:00.954) 0:01:32.666 ******** 2026-01-05 00:54:00.311632 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:54:00.311640 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.311648 | orchestrator | } 2026-01-05 00:54:00.311656 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:54:00.311664 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.311688 | orchestrator | } 2026-01-05 00:54:00.311696 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:54:00.311705 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.311718 | orchestrator | } 2026-01-05 00:54:00.311733 | orchestrator | 2026-01-05 00:54:00.311741 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:54:00.311749 | orchestrator | Monday 05 January 2026 00:51:33 +0000 (0:00:00.853) 0:01:33.520 ******** 2026-01-05 00:54:00.311757 | orchestrator | 2026-01-05 00:54:00.311765 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:54:00.311773 | orchestrator | Monday 05 January 2026 00:51:33 +0000 (0:00:00.121) 0:01:33.641 ******** 2026-01-05 00:54:00.311781 | orchestrator | 2026-01-05 00:54:00.311789 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:54:00.311797 | orchestrator | Monday 05 January 2026 00:51:33 +0000 (0:00:00.070) 0:01:33.712 ******** 2026-01-05 00:54:00.311805 | orchestrator | 2026-01-05 00:54:00.311813 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-05 00:54:00.311821 | orchestrator | Monday 05 
January 2026 00:51:33 +0000 (0:00:00.071) 0:01:33.783 ******** 2026-01-05 00:54:00.311829 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.311837 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.311845 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.311853 | orchestrator | 2026-01-05 00:54:00.311861 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-05 00:54:00.311869 | orchestrator | Monday 05 January 2026 00:51:50 +0000 (0:00:16.950) 0:01:50.734 ******** 2026-01-05 00:54:00.311877 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.311885 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.311893 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.311900 | orchestrator | 2026-01-05 00:54:00.311908 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-05 00:54:00.311916 | orchestrator | Monday 05 January 2026 00:52:05 +0000 (0:00:15.630) 0:02:06.364 ******** 2026-01-05 00:54:00.311924 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-05 00:54:00.311933 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-05 00:54:00.311941 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-05 00:54:00.311948 | orchestrator | 2026-01-05 00:54:00.311956 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-05 00:54:00.311964 | orchestrator | Monday 05 January 2026 00:52:20 +0000 (0:00:14.221) 0:02:20.585 ******** 2026-01-05 00:54:00.311972 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.311980 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.311988 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.311996 | orchestrator | 2026-01-05 00:54:00.312003 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-05 00:54:00.312011 | 
orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:16.085) 0:02:36.670 ******** 2026-01-05 00:54:00.312019 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.312027 | orchestrator | 2026-01-05 00:54:00.312035 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-05 00:54:00.312044 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:00.101) 0:02:36.771 ******** 2026-01-05 00:54:00.312051 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.312060 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.312068 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.312076 | orchestrator | 2026-01-05 00:54:00.312083 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-05 00:54:00.312091 | orchestrator | Monday 05 January 2026 00:52:37 +0000 (0:00:00.830) 0:02:37.602 ******** 2026-01-05 00:54:00.312099 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.312107 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.312115 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.312123 | orchestrator | 2026-01-05 00:54:00.312130 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-05 00:54:00.312138 | orchestrator | Monday 05 January 2026 00:52:37 +0000 (0:00:00.687) 0:02:38.289 ******** 2026-01-05 00:54:00.312146 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.312159 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.312167 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.312174 | orchestrator | 2026-01-05 00:54:00.312182 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-05 00:54:00.312190 | orchestrator | Monday 05 January 2026 00:52:39 +0000 (0:00:01.195) 0:02:39.485 ******** 2026-01-05 00:54:00.312198 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
00:54:00.312206 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.312214 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.312222 | orchestrator | 2026-01-05 00:54:00.312234 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-05 00:54:00.312242 | orchestrator | Monday 05 January 2026 00:52:39 +0000 (0:00:00.739) 0:02:40.225 ******** 2026-01-05 00:54:00.312250 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.312257 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.312266 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.312273 | orchestrator | 2026-01-05 00:54:00.312281 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-05 00:54:00.312289 | orchestrator | Monday 05 January 2026 00:52:40 +0000 (0:00:01.134) 0:02:41.360 ******** 2026-01-05 00:54:00.312297 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.312305 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.312313 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.312321 | orchestrator | 2026-01-05 00:54:00.312329 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-05 00:54:00.312337 | orchestrator | Monday 05 January 2026 00:52:42 +0000 (0:00:01.262) 0:02:42.623 ******** 2026-01-05 00:54:00.312344 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-05 00:54:00.312352 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-05 00:54:00.312360 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-05 00:54:00.312368 | orchestrator | 2026-01-05 00:54:00.312376 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-05 00:54:00.312384 | orchestrator | Monday 05 January 2026 00:52:43 +0000 (0:00:01.367) 0:02:43.990 ******** 2026-01-05 00:54:00.312392 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.312399 | 
orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.312408 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.312415 | orchestrator | 2026-01-05 00:54:00.312423 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 00:54:00.312869 | orchestrator | Monday 05 January 2026 00:52:43 +0000 (0:00:00.373) 0:02:44.364 ******** 2026-01-05 00:54:00.312902 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.312912 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.312922 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.312940 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.312949 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.312965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.312974 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.312991 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313006 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.313033 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 
'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.313074 | orchestrator | 2026-01-05 00:54:00.313087 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 00:54:00.313099 | orchestrator | Monday 05 January 2026 00:52:48 +0000 (0:00:04.176) 0:02:48.540 ******** 2026-01-05 00:54:00.313112 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313133 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 
'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313147 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313164 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313183 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313201 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.313232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.313270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.313309 | orchestrator | 2026-01-05 00:54:00.313325 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-05 00:54:00.313340 | orchestrator | Monday 05 January 2026 00:52:54 +0000 (0:00:06.000) 0:02:54.541 ******** 2026-01-05 00:54:00.313358 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-05 00:54:00.313368 | orchestrator | 2026-01-05 00:54:00.313377 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-05 00:54:00.313385 | orchestrator | Monday 05 January 2026 00:52:54 +0000 (0:00:00.727) 0:02:55.269 ******** 2026-01-05 00:54:00.313394 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.313452 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.313462 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.313472 | orchestrator | 2026-01-05 00:54:00.313483 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-05 00:54:00.313494 | orchestrator | Monday 05 January 2026 00:52:55 +0000 (0:00:00.616) 0:02:55.885 ******** 2026-01-05 00:54:00.313504 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.313514 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.313525 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.313535 | orchestrator | 2026-01-05 00:54:00.313545 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-05 00:54:00.313555 | orchestrator | Monday 05 January 2026 00:52:57 +0000 (0:00:01.732) 0:02:57.618 ******** 2026-01-05 00:54:00.313566 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.313577 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.313586 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.313597 | orchestrator | 2026-01-05 00:54:00.313608 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-05 00:54:00.313618 | orchestrator | Monday 05 January 2026 00:52:58 +0000 (0:00:01.606) 0:02:59.224 ******** 2026-01-05 00:54:00.313629 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313640 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313650 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313666 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313914 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.313934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.313952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.313968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 
'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.313977 | orchestrator | 2026-01-05 00:54:00.313987 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-05 00:54:00.314012 | orchestrator | Monday 05 January 2026 00:53:04 +0000 (0:00:05.475) 0:03:04.699 ******** 2026-01-05 00:54:00.314081 | orchestrator | ok: [testbed-node-0] => { 2026-01-05 00:54:00.314091 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.314100 | orchestrator | } 2026-01-05 00:54:00.314109 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:54:00.314119 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.314136 | orchestrator | } 2026-01-05 00:54:00.314145 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:54:00.314154 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.314163 | orchestrator | } 2026-01-05 00:54:00.314172 | orchestrator | 2026-01-05 00:54:00.314181 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:54:00.314190 | orchestrator | Monday 05 January 2026 00:53:04 +0000 (0:00:00.454) 0:03:05.154 ******** 2026-01-05 00:54:00.314209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.314219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.314228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.314238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.314247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.314256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.314270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.314287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-01-05 00:54:00.314302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:00.314311 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-1, testbed-node-2, testbed-node-0 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:54:00.314320 | orchestrator | 2026-01-05 00:54:00.314328 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-05 00:54:00.314336 | orchestrator | Monday 05 January 2026 00:53:07 +0000 (0:00:02.928) 0:03:08.082 ******** 2026-01-05 00:54:00.314345 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-05 00:54:00.314353 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-05 00:54:00.314361 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-05 00:54:00.314369 | orchestrator | 2026-01-05 00:54:00.314377 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-05 00:54:00.314385 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:01.612) 0:03:09.695 ******** 2026-01-05 00:54:00.314393 | 
orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:54:00.314401 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.314408 | orchestrator | } 2026-01-05 00:54:00.314416 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:54:00.314424 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.314432 | orchestrator | } 2026-01-05 00:54:00.314440 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:54:00.314448 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:54:00.314456 | orchestrator | } 2026-01-05 00:54:00.314464 | orchestrator | 2026-01-05 00:54:00.314472 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:54:00.314480 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:00.567) 0:03:10.263 ******** 2026-01-05 00:54:00.314488 | orchestrator | 2026-01-05 00:54:00.314496 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:54:00.314504 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:00.071) 0:03:10.334 ******** 2026-01-05 00:54:00.314512 | orchestrator | 2026-01-05 00:54:00.314520 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:54:00.314528 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:00.067) 0:03:10.401 ******** 2026-01-05 00:54:00.314536 | orchestrator | 2026-01-05 00:54:00.314543 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-05 00:54:00.314558 | orchestrator | Monday 05 January 2026 00:53:10 +0000 (0:00:00.069) 0:03:10.471 ******** 2026-01-05 00:54:00.314566 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.314573 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.314581 | orchestrator | 2026-01-05 00:54:00.314589 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] 
************************* 2026-01-05 00:54:00.314597 | orchestrator | Monday 05 January 2026 00:53:23 +0000 (0:00:13.267) 0:03:23.738 ******** 2026-01-05 00:54:00.314605 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:00.314613 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:00.314621 | orchestrator | 2026-01-05 00:54:00.314629 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-05 00:54:00.314637 | orchestrator | Monday 05 January 2026 00:53:36 +0000 (0:00:12.893) 0:03:36.632 ******** 2026-01-05 00:54:00.314645 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-05 00:54:00.314653 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-05 00:54:00.314665 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-05 00:54:00.314701 | orchestrator | 2026-01-05 00:54:00.314715 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-05 00:54:00.314725 | orchestrator | Monday 05 January 2026 00:53:49 +0000 (0:00:13.638) 0:03:50.270 ******** 2026-01-05 00:54:00.314732 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:00.314740 | orchestrator | 2026-01-05 00:54:00.314748 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-05 00:54:00.314756 | orchestrator | Monday 05 January 2026 00:53:49 +0000 (0:00:00.131) 0:03:50.402 ******** 2026-01-05 00:54:00.314764 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.314772 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.314780 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.314788 | orchestrator | 2026-01-05 00:54:00.314796 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-05 00:54:00.314804 | orchestrator | Monday 05 January 2026 00:53:50 +0000 (0:00:00.816) 0:03:51.219 ******** 2026-01-05 00:54:00.314812 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 00:54:00.314821 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.314834 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.314845 | orchestrator | 2026-01-05 00:54:00.314857 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-05 00:54:00.314869 | orchestrator | Monday 05 January 2026 00:53:51 +0000 (0:00:01.047) 0:03:52.267 ******** 2026-01-05 00:54:00.314882 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.314894 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.314907 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.314919 | orchestrator | 2026-01-05 00:54:00.314933 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-05 00:54:00.314955 | orchestrator | Monday 05 January 2026 00:53:52 +0000 (0:00:01.049) 0:03:53.316 ******** 2026-01-05 00:54:00.314968 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:00.314976 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:00.314984 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:00.314992 | orchestrator | 2026-01-05 00:54:00.314999 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-05 00:54:00.315007 | orchestrator | Monday 05 January 2026 00:53:53 +0000 (0:00:00.698) 0:03:54.014 ******** 2026-01-05 00:54:00.315015 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.315023 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.315031 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.315039 | orchestrator | 2026-01-05 00:54:00.315047 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-05 00:54:00.315055 | orchestrator | Monday 05 January 2026 00:53:54 +0000 (0:00:00.983) 0:03:54.998 ******** 2026-01-05 00:54:00.315063 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:00.315070 | 
orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:00.315078 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:00.315093 | orchestrator | 2026-01-05 00:54:00.315102 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-05 00:54:00.315110 | orchestrator | Monday 05 January 2026 00:53:55 +0000 (0:00:00.873) 0:03:55.872 ******** 2026-01-05 00:54:00.315117 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-05 00:54:00.315125 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-05 00:54:00.315133 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-05 00:54:00.315141 | orchestrator | 2026-01-05 00:54:00.315149 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:54:00.315157 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-05 00:54:00.315166 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-05 00:54:00.315174 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-05 00:54:00.315182 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:54:00.315190 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:54:00.315198 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:54:00.315206 | orchestrator | 2026-01-05 00:54:00.315214 | orchestrator | 2026-01-05 00:54:00.315222 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:54:00.315230 | orchestrator | Monday 05 January 2026 00:53:56 +0000 (0:00:01.509) 0:03:57.381 ******** 2026-01-05 00:54:00.315238 | orchestrator | 
=============================================================================== 2026-01-05 00:54:00.315246 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 30.22s 2026-01-05 00:54:00.315257 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 28.52s 2026-01-05 00:54:00.315269 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 27.86s 2026-01-05 00:54:00.315281 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.54s 2026-01-05 00:54:00.315295 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.09s 2026-01-05 00:54:00.315308 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.62s 2026-01-05 00:54:00.315320 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.59s 2026-01-05 00:54:00.315333 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.00s 2026-01-05 00:54:00.315353 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.48s 2026-01-05 00:54:00.315366 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.18s 2026-01-05 00:54:00.315379 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 3.97s 2026-01-05 00:54:00.315387 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.34s 2026-01-05 00:54:00.315395 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.93s 2026-01-05 00:54:00.315402 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.85s 2026-01-05 00:54:00.315410 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.80s 2026-01-05 00:54:00.315418 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 2.74s 2026-01-05 00:54:00.315429 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.10s 2026-01-05 00:54:00.315442 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.85s 2026-01-05 00:54:00.315469 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.84s 2026-01-05 00:54:00.315483 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.73s 2026-01-05 00:54:00.315492 | orchestrator | 2026-01-05 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:03.364363 | orchestrator | 2026-01-05 00:54:03 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:54:03.365507 | orchestrator | 2026-01-05 00:54:03 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:54:03.365594 | orchestrator | 2026-01-05 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:06.399223 | orchestrator | 2026-01-05 00:54:06 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:54:06.400500 | orchestrator | 2026-01-05 00:54:06 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:54:06.400685 | orchestrator | 2026-01-05 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:09.446289 | orchestrator | 2026-01-05 00:54:09 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:54:09.448074 | orchestrator | 2026-01-05 00:54:09 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:54:09.448462 | orchestrator | 2026-01-05 00:54:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:12.497310 | orchestrator | 2026-01-05 00:54:12 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 
00:54:12.499217 | orchestrator | 2026-01-05 00:54:12 | INFO  | Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state STARTED 2026-01-05 00:54:12.499357 | orchestrator | 2026-01-05 00:54:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:55:47.011480 | orchestrator | 2026-01-05 00:55:47 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:55:47.023998 | orchestrator | 2026-01-05 00:55:47 | INFO 
| Task 88860598-6bb6-4bfc-919a-f9439d7ea8b2 is in state SUCCESS 2026-01-05 00:55:47.027626 | orchestrator | 2026-01-05 00:55:47.027734 | orchestrator | 2026-01-05 00:55:47.027748 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:55:47.027760 | orchestrator | 2026-01-05 00:55:47.027769 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:55:47.027778 | orchestrator | Monday 05 January 2026 00:48:45 +0000 (0:00:00.258) 0:00:00.258 ******** 2026-01-05 00:55:47.027787 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:55:47.027798 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:55:47.027806 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:55:47.027814 | orchestrator | 2026-01-05 00:55:47.027824 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:55:47.027935 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:00.374) 0:00:00.633 ******** 2026-01-05 00:55:47.027943 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-05 00:55:47.027950 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-05 00:55:47.027955 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-05 00:55:47.027961 | orchestrator | 2026-01-05 00:55:47.027966 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-01-05 00:55:47.028154 | orchestrator | 2026-01-05 00:55:47.028187 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-05 00:55:47.028193 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:00.830) 0:00:01.464 ******** 2026-01-05 00:55:47.028199 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.028205 | orchestrator | 
2026-01-05 00:55:47.028210 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-05 00:55:47.028215 | orchestrator | Monday 05 January 2026 00:48:48 +0000 (0:00:01.206) 0:00:02.670 ******** 2026-01-05 00:55:47.028220 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:55:47.028226 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:55:47.028231 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:55:47.028237 | orchestrator | 2026-01-05 00:55:47.028242 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-05 00:55:47.028247 | orchestrator | Monday 05 January 2026 00:48:49 +0000 (0:00:00.942) 0:00:03.613 ******** 2026-01-05 00:55:47.028253 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.028258 | orchestrator | 2026-01-05 00:55:47.028263 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-05 00:55:47.028268 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:01.383) 0:00:04.996 ******** 2026-01-05 00:55:47.028273 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:55:47.028278 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:55:47.028283 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:55:47.028288 | orchestrator | 2026-01-05 00:55:47.028294 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-05 00:55:47.028299 | orchestrator | Monday 05 January 2026 00:48:51 +0000 (0:00:01.318) 0:00:06.315 ******** 2026-01-05 00:55:47.028304 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:55:47.028310 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:55:47.028318 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 
2026-01-05 00:55:47.028327 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:55:47.028341 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-05 00:55:47.028352 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:55:47.028361 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 00:55:47.028369 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-05 00:55:47.028378 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:55:47.028387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 00:55:47.028396 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-05 00:55:47.028405 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 00:55:47.028414 | orchestrator | 2026-01-05 00:55:47.028423 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-05 00:55:47.028435 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:03.983) 0:00:10.298 ******** 2026-01-05 00:55:47.028447 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-05 00:55:47.028472 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-05 00:55:47.028481 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-05 00:55:47.028490 | orchestrator | 2026-01-05 00:55:47.028497 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 00:55:47.028506 | orchestrator | Monday 05 January 2026 00:48:56 +0000 (0:00:01.009) 0:00:11.307 
******** 2026-01-05 00:55:47.028540 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-05 00:55:47.028549 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-05 00:55:47.028556 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-05 00:55:47.028564 | orchestrator | 2026-01-05 00:55:47.028573 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-05 00:55:47.028581 | orchestrator | Monday 05 January 2026 00:48:58 +0000 (0:00:01.701) 0:00:13.009 ******** 2026-01-05 00:55:47.028590 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-05 00:55:47.028599 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.028624 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-05 00:55:47.028821 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.028835 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-05 00:55:47.028841 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.028847 | orchestrator | 2026-01-05 00:55:47.028853 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-05 00:55:47.028859 | orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:00.836) 0:00:13.845 ******** 2026-01-05 00:55:47.028868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 
00:55:47.028882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.028891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.028905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 
00:55:47.028916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.028956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.028968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.028979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.028988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.028997 | orchestrator | 2026-01-05 00:55:47.029006 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-05 00:55:47.029012 | orchestrator | Monday 05 January 2026 00:49:01 +0000 (0:00:01.908) 0:00:15.754 ******** 2026-01-05 00:55:47.029018 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.029024 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.029031 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.029036 | orchestrator | 2026-01-05 00:55:47.029042 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-05 00:55:47.029048 | orchestrator | Monday 05 January 2026 00:49:02 +0000 (0:00:01.413) 0:00:17.167 ******** 2026-01-05 00:55:47.029055 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-05 00:55:47.029061 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-05 00:55:47.029067 | orchestrator | changed: [testbed-node-2] => (item=users) 
2026-01-05 00:55:47.029073 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-05 00:55:47.029079 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-05 00:55:47.029086 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-05 00:55:47.029091 | orchestrator | 2026-01-05 00:55:47.029098 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-05 00:55:47.029104 | orchestrator | Monday 05 January 2026 00:49:05 +0000 (0:00:02.324) 0:00:19.492 ******** 2026-01-05 00:55:47.029116 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.029122 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.029128 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.029133 | orchestrator | 2026-01-05 00:55:47.029137 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-05 00:55:47.029143 | orchestrator | Monday 05 January 2026 00:49:07 +0000 (0:00:02.029) 0:00:21.522 ******** 2026-01-05 00:55:47.029148 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:55:47.029154 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:55:47.029159 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:55:47.029164 | orchestrator | 2026-01-05 00:55:47.029169 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-05 00:55:47.029174 | orchestrator | Monday 05 January 2026 00:49:09 +0000 (0:00:02.690) 0:00:24.213 ******** 2026-01-05 00:55:47.029179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.029195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.029201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.029208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4', 
'__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:55:47.030246 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.030313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.030354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.030363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.030380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:55:47.030387 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.030410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.030418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.030424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.030432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:55:47.030445 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.030452 | orchestrator | 2026-01-05 00:55:47.030476 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-05 00:55:47.030485 | orchestrator | Monday 05 January 2026 00:49:11 +0000 (0:00:01.498) 0:00:25.711 ******** 2026-01-05 00:55:47.030491 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030575 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.030595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:55:47.030602 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.030618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:55:47.030683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.030699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4', '__omit_place_holder__132f9114f1783391a7ae531abb43c352568a21f4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:55:47.030712 | orchestrator | 2026-01-05 00:55:47.030717 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-05 00:55:47.030724 | orchestrator | 
Monday 05 January 2026 00:49:15 +0000 (0:00:04.034) 0:00:29.746 ******** 2026-01-05 00:55:47.030730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.030797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.030804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.030810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.030817 | orchestrator | 2026-01-05 00:55:47.030823 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-05 00:55:47.030829 | orchestrator | Monday 05 January 2026 00:49:18 +0000 (0:00:03.469) 0:00:33.215 ******** 2026-01-05 
00:55:47.030836 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:55:47.030843 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:55:47.030849 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:55:47.030855 | orchestrator | 2026-01-05 00:55:47.030861 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-05 00:55:47.030892 | orchestrator | Monday 05 January 2026 00:49:21 +0000 (0:00:02.224) 0:00:35.440 ******** 2026-01-05 00:55:47.030900 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:55:47.030906 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:55:47.030913 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:55:47.031179 | orchestrator | 2026-01-05 00:55:47.031234 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-05 00:55:47.031243 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:03.822) 0:00:39.262 ******** 2026-01-05 00:55:47.031250 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.031257 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.031262 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.031268 | orchestrator | 2026-01-05 00:55:47.031274 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-05 00:55:47.031339 | orchestrator | Monday 05 January 2026 00:49:26 +0000 (0:00:01.404) 0:00:40.667 ******** 2026-01-05 00:55:47.031347 | orchestrator | changed: [testbed-node-0] 
=> (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:55:47.031355 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:55:47.031361 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:55:47.031367 | orchestrator | 2026-01-05 00:55:47.031373 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-05 00:55:47.031379 | orchestrator | Monday 05 January 2026 00:49:30 +0000 (0:00:03.761) 0:00:44.428 ******** 2026-01-05 00:55:47.031385 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:55:47.031392 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:55:47.031398 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:55:47.031405 | orchestrator | 2026-01-05 00:55:47.031412 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-05 00:55:47.031418 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:03.117) 0:00:47.546 ******** 2026-01-05 00:55:47.031425 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.031432 | orchestrator | 2026-01-05 00:55:47.031438 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-05 00:55:47.031444 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:00.580) 0:00:48.126 ******** 2026-01-05 00:55:47.031452 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-05 00:55:47.031510 | 
orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-05 00:55:47.031517 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-05 00:55:47.031524 | orchestrator | 2026-01-05 00:55:47.031531 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-05 00:55:47.031537 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:02.047) 0:00:50.174 ******** 2026-01-05 00:55:47.031545 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-05 00:55:47.031553 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-05 00:55:47.031560 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-05 00:55:47.031567 | orchestrator | 2026-01-05 00:55:47.031574 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-01-05 00:55:47.031796 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:02.272) 0:00:52.446 ******** 2026-01-05 00:55:47.031818 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.031826 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.031832 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.031838 | orchestrator | 2026-01-05 00:55:47.031844 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-01-05 00:55:47.031850 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:00.316) 0:00:52.762 ******** 2026-01-05 00:55:47.031856 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.031890 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.031899 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.031905 | orchestrator | 2026-01-05 00:55:47.031912 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-05 00:55:47.031918 | orchestrator | Monday 05 January 2026 00:49:38 +0000 
(0:00:00.267) 0:00:53.029 ******** 2026-01-05 00:55:47.031926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.031985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.031995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.032001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.032008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.032015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.032022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.032039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.032080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.032108 | orchestrator | 2026-01-05 00:55:47.032112 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 00:55:47.032117 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:03.296) 0:00:56.326 ******** 2026-01-05 00:55:47.032121 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.032125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.032129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 
00:55:47.032133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.032142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.032146 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.032154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.032158 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.032175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.032404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.032411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.032418 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.032425 | orchestrator | 2026-01-05 00:55:47.032431 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
key] ***** 2026-01-05 00:55:47.032437 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:00.907) 0:00:57.233 ******** 2026-01-05 00:55:47.032444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.032516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.032525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}}})  2026-01-05 00:55:47.032533 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.032603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.032611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.032615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.032619 | orchestrator 
| skipping: [testbed-node-1] 2026-01-05 00:55:47.032623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.032627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.032638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.032642 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
00:55:47.032646 | orchestrator | 2026-01-05 00:55:47.032650 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-05 00:55:47.032656 | orchestrator | Monday 05 January 2026 00:49:43 +0000 (0:00:00.991) 0:00:58.225 ******** 2026-01-05 00:55:47.032662 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:55:47.032669 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:55:47.032678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:55:47.032684 | orchestrator | 2026-01-05 00:55:47.032742 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-05 00:55:47.032752 | orchestrator | Monday 05 January 2026 00:49:45 +0000 (0:00:01.597) 0:00:59.822 ******** 2026-01-05 00:55:47.032759 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:55:47.032811 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:55:47.032821 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:55:47.032827 | orchestrator | 2026-01-05 00:55:47.032833 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-05 00:55:47.032840 | orchestrator | Monday 05 January 2026 00:49:47 +0000 (0:00:01.843) 0:01:01.666 ******** 2026-01-05 00:55:47.032846 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:55:47.032853 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-01-05 00:55:47.032858 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:55:47.032934 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:55:47.032942 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.032949 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:55:47.032955 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.032961 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:55:47.032968 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.032974 | orchestrator | 2026-01-05 00:55:47.032980 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-05 00:55:47.032986 | orchestrator | Monday 05 January 2026 00:49:48 +0000 (0:00:01.422) 0:01:03.089 ******** 2026-01-05 00:55:47.032993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.033010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.033017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:55:47.033029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.033097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.033108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:55:47.033115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.033134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.033141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:55:47.033148 | orchestrator | 2026-01-05 00:55:47.033155 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-05 00:55:47.033161 | orchestrator | Monday 05 January 2026 00:49:51 +0000 (0:00:02.709) 0:01:05.798 ******** 2026-01-05 00:55:47.033168 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:55:47.033174 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:55:47.033180 | orchestrator | } 2026-01-05 00:55:47.033188 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:55:47.033194 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:55:47.033200 | orchestrator | } 2026-01-05 00:55:47.033207 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:55:47.033212 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:55:47.033219 | orchestrator | } 2026-01-05 00:55:47.033225 | orchestrator | 2026-01-05 00:55:47.033231 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:55:47.033237 | orchestrator | Monday 05 January 2026 00:49:51 +0000 (0:00:00.314) 0:01:06.113 ******** 2026-01-05 00:55:47.033247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.033324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.033336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.033347 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.033355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.033361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.033368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.033374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:55:47.033381 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.033392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:55:47.033868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:55:47.033893 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.033901 | orchestrator | 2026-01-05 00:55:47.033945 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-05 00:55:47.033954 | orchestrator | Monday 05 January 2026 
00:49:52 +0000 (0:00:01.073) 0:01:07.186 ******** 2026-01-05 00:55:47.033961 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.033965 | orchestrator | 2026-01-05 00:55:47.033969 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-05 00:55:47.034138 | orchestrator | Monday 05 January 2026 00:49:53 +0000 (0:00:00.536) 0:01:07.723 ******** 2026-01-05 00:55:47.034146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.034152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:55:47.034157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.034223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:55:47.034228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.034240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:55:47.034277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034292 | orchestrator | 2026-01-05 00:55:47.034296 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-05 00:55:47.034300 | orchestrator | Monday 05 January 2026 00:49:57 +0000 (0:00:04.307) 0:01:12.031 ******** 2026-01-05 00:55:47.034304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.034308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:55:47.034312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034327 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.034362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.034368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:55:47.034373 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.034378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:55:47.034409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034419 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.034544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034559 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.034563 | orchestrator | 2026-01-05 00:55:47.034567 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-05 00:55:47.034571 | orchestrator | Monday 05 January 2026 00:49:58 +0000 (0:00:01.058) 0:01:13.090 ******** 2026-01-05 00:55:47.034575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.034584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.034589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.034594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.034633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.034638 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.034642 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.034646 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.034650 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.034654 | orchestrator | 2026-01-05 00:55:47.034658 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-05 00:55:47.034662 | orchestrator | Monday 05 January 2026 00:49:59 +0000 (0:00:01.083) 0:01:14.173 ******** 2026-01-05 00:55:47.034673 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.034676 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.034680 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.034684 | orchestrator | 2026-01-05 00:55:47.034688 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-05 00:55:47.034692 | orchestrator | Monday 05 January 2026 00:50:01 +0000 (0:00:01.928) 0:01:16.101 ******** 2026-01-05 00:55:47.034695 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.034699 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.034703 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.034707 | orchestrator | 2026-01-05 00:55:47.034711 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-05 00:55:47.034715 | orchestrator | Monday 05 January 2026 00:50:04 +0000 (0:00:02.530) 0:01:18.632 ******** 2026-01-05 00:55:47.034719 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.034723 | orchestrator | 2026-01-05 00:55:47.034727 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-05 00:55:47.034730 | orchestrator | Monday 05 January 2026 00:50:05 +0000 (0:00:01.100) 0:01:19.733 
******** 2026-01-05 00:55:47.034778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.034789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.034817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.034895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.034914 | orchestrator | 2026-01-05 00:55:47.034921 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-05 00:55:47.034928 | orchestrator | Monday 05 January 2026 00:50:10 +0000 (0:00:05.335) 0:01:25.068 ******** 2026-01-05 00:55:47.034935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.035189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.035212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.035216 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 00:55:47.035221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.035226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.035238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.035242 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.035296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.035303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.035307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.035311 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.035315 | orchestrator | 2026-01-05 00:55:47.035319 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-05 00:55:47.035323 | orchestrator | Monday 05 January 2026 00:50:11 +0000 (0:00:00.816) 0:01:25.884 ******** 2026-01-05 00:55:47.035328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.035353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.035362 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.035368 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.035375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.035381 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.035388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.035394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.035400 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.035407 | orchestrator | 2026-01-05 00:55:47.035414 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-05 00:55:47.035421 | orchestrator | Monday 05 January 2026 00:50:12 +0000 (0:00:01.142) 0:01:27.026 ******** 2026-01-05 00:55:47.035427 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.035433 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.035476 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.035484 | orchestrator | 2026-01-05 00:55:47.035491 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-05 00:55:47.035904 | orchestrator | Monday 05 
January 2026 00:50:14 +0000 (0:00:01.420) 0:01:28.447 ******** 2026-01-05 00:55:47.035924 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.035928 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.035932 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.035936 | orchestrator | 2026-01-05 00:55:47.035939 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-05 00:55:47.035943 | orchestrator | Monday 05 January 2026 00:50:16 +0000 (0:00:02.660) 0:01:31.108 ******** 2026-01-05 00:55:47.035947 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.035951 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.035955 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.035959 | orchestrator | 2026-01-05 00:55:47.036017 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-05 00:55:47.036023 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:00.262) 0:01:31.370 ******** 2026-01-05 00:55:47.036027 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.036031 | orchestrator | 2026-01-05 00:55:47.036035 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-05 00:55:47.036039 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:00.752) 0:01:32.123 ******** 2026-01-05 00:55:47.036061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:55:47.036078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:55:47.036225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:55:47.036231 | orchestrator | 2026-01-05 00:55:47.036235 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-05 00:55:47.036239 | orchestrator | Monday 05 January 2026 00:50:24 +0000 (0:00:06.347) 0:01:38.471 ******** 2026-01-05 00:55:47.036248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 00:55:47.036252 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.036299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 00:55:47.036312 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.036316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 00:55:47.036320 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.036324 | orchestrator | 2026-01-05 00:55:47.036327 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-05 00:55:47.036331 | orchestrator | Monday 05 January 2026 00:50:26 +0000 (0:00:01.885) 0:01:40.356 ******** 2026-01-05 00:55:47.036336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:55:47.036343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:55:47.036348 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.036352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:55:47.036356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:55:47.036360 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.036367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:55:47.036403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:55:47.036417 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.036423 | orchestrator | 2026-01-05 00:55:47.036430 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-05 00:55:47.036436 | orchestrator | Monday 05 January 2026 00:50:28 +0000 (0:00:02.117) 0:01:42.473 ******** 2026-01-05 00:55:47.036442 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.036448 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.036506 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.036513 | orchestrator | 2026-01-05 00:55:47.036519 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-05 00:55:47.036524 | orchestrator | Monday 05 January 2026 00:50:28 +0000 (0:00:00.425) 0:01:42.898 ******** 2026-01-05 00:55:47.036551 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.036589 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.036595 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.036599 | orchestrator | 2026-01-05 00:55:47.036603 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-05 00:55:47.036607 | orchestrator | Monday 05 January 2026 00:50:29 +0000 (0:00:01.077) 0:01:43.975 ******** 2026-01-05 00:55:47.036611 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.036615 | orchestrator | 2026-01-05 00:55:47.036619 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-05 
00:55:47.036807 | orchestrator | Monday 05 January 2026 00:50:30 +0000 (0:00:00.821) 0:01:44.796 ******** 2026-01-05 00:55:47.036827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.036835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.036843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.036930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.036943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.036947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.036952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.036956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.037002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.037008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.037012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.037016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.037020 | orchestrator | 2026-01-05 00:55:47.037024 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-05 00:55:47.037028 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:04.022) 0:01:48.819 ******** 2026-01-05 
00:55:47.037032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.037044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037085 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.037089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.037093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037840 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.037846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.037851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.037868 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.037872 | orchestrator |
2026-01-05 00:55:47.037875 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-01-05 00:55:47.037880 | orchestrator | Monday 05 January 2026 00:50:35 +0000 (0:00:01.106) 0:01:49.925 ********
2026-01-05 00:55:47.037886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.037897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.037904 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.037912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.037921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.037928 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.037934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.037940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.037946 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.037952 | orchestrator |
2026-01-05 00:55:47.037958 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-01-05 00:55:47.037963 | orchestrator | Monday 05 January 2026 00:50:36 +0000 (0:00:01.279) 0:01:51.205 ********
2026-01-05 00:55:47.037969 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.037975 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.037981 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.037987 | orchestrator |
2026-01-05 00:55:47.037992 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-01-05 00:55:47.037997 | orchestrator | Monday 05 January 2026 00:50:38 +0000 (0:00:01.651) 0:01:52.856 ********
2026-01-05 00:55:47.038003 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.038008 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.038051 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.038057 | orchestrator |
2026-01-05 00:55:47.038064 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-01-05 00:55:47.038077 | orchestrator | Monday 05 January 2026 00:50:40 +0000 (0:00:02.399) 0:01:55.256 ********
2026-01-05 00:55:47.038083 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.038090 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.038097 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.038103 | orchestrator |
2026-01-05 00:55:47.038110 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-01-05 00:55:47.038116 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:00.318) 0:01:55.574 ********
2026-01-05 00:55:47.038123 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.038129 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.038135 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.038143 | orchestrator |
2026-01-05 00:55:47.038147 | orchestrator | TASK [include_role : designate] ************************************************
2026-01-05 00:55:47.038150 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:00.355) 0:01:55.930 ********
2026-01-05 00:55:47.038154 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.038158 | orchestrator |
2026-01-05 00:55:47.038162 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-01-05 00:55:47.038165 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:01.027) 0:01:56.958 ********
2026-01-05 00:55:47.038171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.038185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 00:55:47.038190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.038203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.038212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 00:55:47.038221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 00:55:47.038233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image':
'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038302 | orchestrator |
2026-01-05 00:55:47.038306 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-01-05 00:55:47.038310 | orchestrator | Monday 05 January 2026 00:50:46 +0000 (0:00:04.252) 0:02:01.210 ********
2026-01-05 00:55:47.038314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.038352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 00:55:47.038359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038403 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.038410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.038416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 00:55:47.038423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.038487 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.038494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.038500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 00:55:47.038511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.038523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.038538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.038545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.038552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.038558 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.038564 | orchestrator | 2026-01-05 00:55:47.038568 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-05 00:55:47.038571 | orchestrator | Monday 05 January 2026 00:50:47 +0000 (0:00:00.870) 0:02:02.080 ******** 2026-01-05 00:55:47.038576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.038583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.038587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  
2026-01-05 00:55:47.038592 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.038595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.038599 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.038606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.038613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.038617 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.038620 | orchestrator | 2026-01-05 00:55:47.038628 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-05 00:55:47.038632 | orchestrator | Monday 05 January 2026 00:50:49 +0000 (0:00:01.563) 0:02:03.644 ******** 2026-01-05 00:55:47.038636 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.038640 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.038644 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.038647 | orchestrator | 2026-01-05 00:55:47.038651 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-05 00:55:47.038655 | orchestrator | Monday 05 January 2026 00:50:50 +0000 (0:00:01.241) 0:02:04.885 ******** 2026-01-05 00:55:47.038659 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.038662 | orchestrator | changed: [testbed-node-0] 
2026-01-05 00:55:47.038666 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.038670 | orchestrator | 2026-01-05 00:55:47.038674 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-05 00:55:47.038677 | orchestrator | Monday 05 January 2026 00:50:52 +0000 (0:00:02.077) 0:02:06.963 ******** 2026-01-05 00:55:47.038681 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.038685 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.038689 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.038692 | orchestrator | 2026-01-05 00:55:47.038696 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-05 00:55:47.038700 | orchestrator | Monday 05 January 2026 00:50:52 +0000 (0:00:00.308) 0:02:07.271 ******** 2026-01-05 00:55:47.038704 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.038708 | orchestrator | 2026-01-05 00:55:47.038711 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-05 00:55:47.038715 | orchestrator | Monday 05 January 2026 00:50:54 +0000 (0:00:01.117) 0:02:08.389 ******** 2026-01-05 00:55:47.038720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:55:47.038735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.038740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option 
httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:55:47.038750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.038759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:55:47.038781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.038789 | orchestrator | 2026-01-05 00:55:47.038796 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-05 00:55:47.038800 | orchestrator | Monday 05 January 2026 00:50:58 +0000 (0:00:04.827) 0:02:13.216 ******** 2026-01-05 00:55:47.038807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:55:47.038816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.038828 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.038839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:55:47.038846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.038856 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.038873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:55:47.038881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.038891 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.038898 | orchestrator | 2026-01-05 00:55:47.038904 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-05 00:55:47.038910 | orchestrator | Monday 05 January 2026 00:51:03 +0000 (0:00:04.453) 0:02:17.670 ******** 2026-01-05 00:55:47.038920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:55:47.038930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:55:47.038937 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.038943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:55:47.039671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:55:47.040620 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.040640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:55:47.040645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:55:47.040658 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.040662 | orchestrator | 2026-01-05 00:55:47.040666 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-05 00:55:47.040670 | orchestrator | Monday 05 January 2026 00:51:08 +0000 (0:00:05.477) 0:02:23.147 ******** 2026-01-05 00:55:47.040674 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.040678 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.040681 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.040685 | orchestrator | 2026-01-05 00:55:47.040689 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-05 00:55:47.040693 | orchestrator | Monday 05 January 2026 00:51:10 +0000 (0:00:01.446) 0:02:24.594 ******** 2026-01-05 00:55:47.040696 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.040700 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.040704 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.040708 | orchestrator | 2026-01-05 00:55:47.040712 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-05 00:55:47.040715 | orchestrator | Monday 05 January 2026 00:51:12 +0000 (0:00:02.384) 0:02:26.978 ******** 2026-01-05 00:55:47.040719 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.040723 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.040727 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.040730 | orchestrator | 2026-01-05 00:55:47.040734 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-05 00:55:47.040738 | orchestrator | Monday 05 January 2026 00:51:13 +0000 (0:00:00.379) 0:02:27.358 ******** 2026-01-05 00:55:47.040742 | orchestrator | 
included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.040746 | orchestrator | 2026-01-05 00:55:47.040750 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-05 00:55:47.040756 | orchestrator | Monday 05 January 2026 00:51:14 +0000 (0:00:01.649) 0:02:29.008 ******** 2026-01-05 00:55:47.040760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.040766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.040782 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.040789 | orchestrator | 2026-01-05 00:55:47.040793 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-05 00:55:47.040797 | orchestrator | Monday 05 January 2026 00:51:19 +0000 (0:00:04.779) 0:02:33.787 ******** 2026-01-05 00:55:47.040801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.040805 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.040809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.040813 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.040818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.040823 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.040826 | orchestrator | 2026-01-05 00:55:47.040830 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-05 00:55:47.040834 | orchestrator | Monday 05 January 2026 00:51:19 +0000 (0:00:00.450) 0:02:34.238 ******** 2026-01-05 00:55:47.040838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.040844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.040854 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.040865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.040871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.040878 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.040885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.040891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.040897 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.040903 | orchestrator | 2026-01-05 00:55:47.040909 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-05 00:55:47.040915 | orchestrator | Monday 05 January 2026 
00:51:20 +0000 (0:00:00.732) 0:02:34.970 ******** 2026-01-05 00:55:47.040921 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.040928 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.040934 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.040940 | orchestrator | 2026-01-05 00:55:47.040947 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-05 00:55:47.040953 | orchestrator | Monday 05 January 2026 00:51:22 +0000 (0:00:01.639) 0:02:36.610 ******** 2026-01-05 00:55:47.040959 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.040965 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.040972 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.040979 | orchestrator | 2026-01-05 00:55:47.040985 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-05 00:55:47.040992 | orchestrator | Monday 05 January 2026 00:51:24 +0000 (0:00:02.108) 0:02:38.719 ******** 2026-01-05 00:55:47.040998 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.041005 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.041011 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.041017 | orchestrator | 2026-01-05 00:55:47.041024 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-05 00:55:47.041031 | orchestrator | Monday 05 January 2026 00:51:24 +0000 (0:00:00.374) 0:02:39.093 ******** 2026-01-05 00:55:47.041037 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.041044 | orchestrator | 2026-01-05 00:55:47.041050 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-05 00:55:47.041057 | orchestrator | Monday 05 January 2026 00:51:25 +0000 (0:00:01.103) 0:02:40.197 ******** 2026-01-05 00:55:47.041076 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:55:47.041090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:55:47.041106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:55:47.041119 | orchestrator | 2026-01-05 00:55:47.041126 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-05 00:55:47.041134 | orchestrator | Monday 05 January 2026 00:51:29 +0000 (0:00:03.730) 0:02:43.927 ******** 2026-01-05 00:55:47.041144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:55:47.041155 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.041164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:55:47.041169 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.041177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:55:47.041184 | orchestrator | skipping: [testbed-node-2] 
2026-01-05 00:55:47.041189 | orchestrator |
2026-01-05 00:55:47.041193 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-01-05 00:55:47.041198 | orchestrator | Monday 05 January 2026 00:51:30 +0000 (0:00:01.173) 0:02:45.101 ********
2026-01-05 00:55:47.041203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-05 00:55:47.041212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-05 00:55:47.041219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-05 00:55:47.041226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-05 00:55:47.041231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-05 00:55:47.041237 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.041241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-05 00:55:47.041246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-05 00:55:47.041251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-05 00:55:47.041255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-05 00:55:47.041266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-05 00:55:47.041270 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.041275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-05 00:55:47.041280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-05 00:55:47.041285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-05 00:55:47.041290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-05 00:55:47.041297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-05 00:55:47.041302 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.041306 | orchestrator |
2026-01-05 00:55:47.041310 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-01-05 00:55:47.041314 | orchestrator | Monday 05 January 2026 00:51:31 +0000 (0:00:01.130) 0:02:46.232 ********
2026-01-05 00:55:47.041317 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.041321 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.041325 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.041329 | orchestrator |
2026-01-05 00:55:47.041332 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-01-05 00:55:47.041336 | orchestrator | Monday 05 January 2026 00:51:33 +0000 (0:00:01.547) 0:02:47.779 ********
2026-01-05 00:55:47.041340 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.041344 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.041347 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.041351 | orchestrator |
2026-01-05 00:55:47.041355 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-01-05 00:55:47.041359 | orchestrator | Monday 05 January 2026 00:51:36 +0000 (0:00:02.855) 0:02:50.635 ********
2026-01-05 00:55:47.041362 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.041366 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.041370 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.041374 | orchestrator |
2026-01-05 00:55:47.041378 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-01-05 00:55:47.041381 | orchestrator | Monday 05 January 2026 00:51:36 +0000 (0:00:00.398) 0:02:51.033 ********
2026-01-05 00:55:47.041385 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.041389 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.041393 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.041399 | orchestrator |
2026-01-05 00:55:47.041403 | orchestrator | TASK [include_role : keystone] *************************************************
2026-01-05 00:55:47.041407 | orchestrator | Monday 05 January 2026 00:51:37 +0000 (0:00:00.574) 0:02:51.608 ********
2026-01-05 00:55:47.041410 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.041414 | orchestrator |
2026-01-05 00:55:47.041418 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-01-05 00:55:47.041422 | orchestrator | Monday 05 January 2026 00:51:39 +0000 (0:00:02.009) 0:02:53.617 ********
2026-01-05 00:55:47.041428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-05 00:55:47.041432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 00:55:47.041437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 00:55:47.041448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-05 00:55:47.041475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 00:55:47.041482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 00:55:47.041492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-05 00:55:47.041499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 00:55:47.041509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 00:55:47.041517 | orchestrator |
2026-01-05 00:55:47.041523 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-01-05 00:55:47.041530 | orchestrator | Monday 05 January 2026 00:51:43 +0000 (0:00:04.285) 0:02:57.903 ********
2026-01-05 00:55:47.041537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-05 00:55:47.041548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 00:55:47.041558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 00:55:47.041565 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.041572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-05 00:55:47.041582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 00:55:47.041590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 00:55:47.041600 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.041607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-05 00:55:47.041617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 00:55:47.041622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 00:55:47.041625 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.041629 | orchestrator |
2026-01-05 00:55:47.041633 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-01-05 00:55:47.041637 | orchestrator | Monday 05 January 2026 00:51:44 +0000 (0:00:00.678) 0:02:58.581 ********
2026-01-05 00:55:47.041641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-05 00:55:47.041645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-05 00:55:47.041650 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.041658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-05 00:55:47.041665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-05 00:55:47.041669 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.041672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-05 00:55:47.041676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-05 00:55:47.041680 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.041684 | orchestrator |
2026-01-05 00:55:47.041688 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-01-05 00:55:47.041692 | orchestrator | Monday 05 January 2026 00:51:45 +0000 (0:00:01.166) 0:02:59.748 ********
2026-01-05 00:55:47.041695 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.041699 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.041703 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.041707 | orchestrator |
2026-01-05 00:55:47.041710 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-01-05 00:55:47.041714 | orchestrator | Monday 05 January 2026 00:51:46 +0000 (0:00:01.129) 0:03:00.878 ********
2026-01-05 00:55:47.041718 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.041722 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.041726 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.041729 | orchestrator |
2026-01-05 00:55:47.041733 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-01-05 00:55:47.041737 | orchestrator | Monday 05 January 2026 00:51:48 +0000 (0:00:00.332) 0:03:02.869 ********
2026-01-05 00:55:47.041741 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.041744 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.041748 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.041752 | orchestrator |
2026-01-05 00:55:47.041756 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-01-05 00:55:47.041760 | orchestrator | Monday 05 January 2026 00:51:48 +0000 (0:00:00.332) 0:03:03.201 ********
2026-01-05 00:55:47.041764 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.041767 | orchestrator |
2026-01-05 00:55:47.041771 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-01-05 00:55:47.041775 | orchestrator | Monday 05 January 2026 00:51:50 +0000 (0:00:01.258) 0:03:04.460 ********
2026-01-05 00:55:47.041781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.041791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.041796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.041800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.041806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.041810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.041817 | orchestrator |
2026-01-05 00:55:47.041821 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-01-05 00:55:47.041825 | orchestrator | Monday 05 January 2026 00:51:55 +0000 (0:00:05.060) 0:03:09.521 ********
2026-01-05 00:55:47.041833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.041837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.041842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.041847 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.041852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.041860 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.041866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.041871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.041875 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.041878 | orchestrator |
2026-01-05 00:55:47.041882 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-01-05 00:55:47.041886 | orchestrator | Monday 05 January 2026 00:51:56 +0000 (0:00:00.882) 0:03:10.403 ********
2026-01-05 00:55:47.041890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.041894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.041898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.041902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.041906 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.041910 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.041914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.041920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.041926 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.041930 | orchestrator |
2026-01-05 00:55:47.041934 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-01-05 00:55:47.041937 | orchestrator | Monday 05 January 2026 00:51:57 +0000 (0:00:00.977) 0:03:11.381 ********
2026-01-05 00:55:47.041941 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.041945 | orchestrator | changed: [testbed-node-1]
2026-01-05
00:55:47.041949 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.041953 | orchestrator | 2026-01-05 00:55:47.041956 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-05 00:55:47.041960 | orchestrator | Monday 05 January 2026 00:51:58 +0000 (0:00:01.766) 0:03:13.148 ******** 2026-01-05 00:55:47.041964 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.041968 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.041971 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.041975 | orchestrator | 2026-01-05 00:55:47.041979 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-05 00:55:47.041983 | orchestrator | Monday 05 January 2026 00:52:00 +0000 (0:00:02.096) 0:03:15.245 ******** 2026-01-05 00:55:47.041987 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.041990 | orchestrator | 2026-01-05 00:55:47.041994 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-05 00:55:47.041998 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:01.107) 0:03:16.353 ******** 2026-01-05 00:55:47.042006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.042043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.042060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.042081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042105 | orchestrator | 2026-01-05 00:55:47.042109 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-05 00:55:47.042113 | orchestrator | Monday 05 January 2026 00:52:05 +0000 (0:00:03.498) 0:03:19.851 ******** 2026-01-05 00:55:47.042310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.042371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042424 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.042430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.042445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042490 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.042499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.042504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.042523 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.042528 | orchestrator | 2026-01-05 00:55:47.042534 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-05 00:55:47.042542 | orchestrator | Monday 05 January 2026 00:52:06 +0000 (0:00:01.208) 0:03:21.060 ******** 2026-01-05 00:55:47.042557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': 
'8786', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.042567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.042582 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.042586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.042591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.042597 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.042604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.042610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.042618 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.042625 | orchestrator | 2026-01-05 00:55:47.042632 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-05 00:55:47.042639 | orchestrator | Monday 05 January 2026 00:52:08 +0000 (0:00:01.606) 0:03:22.666 ******** 2026-01-05 00:55:47.042646 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 00:55:47.042653 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.042660 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.042667 | orchestrator | 2026-01-05 00:55:47.042675 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-05 00:55:47.042687 | orchestrator | Monday 05 January 2026 00:52:09 +0000 (0:00:01.195) 0:03:23.862 ******** 2026-01-05 00:55:47.042694 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.042702 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.042710 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.042715 | orchestrator | 2026-01-05 00:55:47.042720 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-05 00:55:47.042724 | orchestrator | Monday 05 January 2026 00:52:11 +0000 (0:00:02.275) 0:03:26.137 ******** 2026-01-05 00:55:47.042728 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.042732 | orchestrator | 2026-01-05 00:55:47.042736 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-05 00:55:47.042740 | orchestrator | Monday 05 January 2026 00:52:13 +0000 (0:00:01.522) 0:03:27.660 ******** 2026-01-05 00:55:47.042746 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 00:55:47.042751 | orchestrator | 2026-01-05 00:55:47.042756 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-05 00:55:47.042763 | orchestrator | Monday 05 January 2026 00:52:16 +0000 (0:00:03.271) 0:03:30.931 ******** 2026-01-05 00:55:47.042778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:47.042794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:47.042803 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.042814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:47.042823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:47.042842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:47.042848 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.042855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:47.042862 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.042870 | orchestrator | 2026-01-05 00:55:47.042878 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-05 00:55:47.042887 | orchestrator | Monday 05 January 2026 00:52:19 +0000 (0:00:02.430) 0:03:33.362 ******** 2026-01-05 00:55:47.042904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:47.042921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:47.042930 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.043021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:47.043040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:47.043047 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.043058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:47.043072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:47.043079 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.043087 | orchestrator | 2026-01-05 00:55:47.043094 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-05 00:55:47.043102 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:03.758) 0:03:37.121 ******** 2026-01-05 00:55:47.043114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:47.043124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:47.043133 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.043142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:47.043160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:47.043168 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.043175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:47.043184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:47.043189 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.043194 | orchestrator | 2026-01-05 00:55:47.043199 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-05 00:55:47.043204 | orchestrator | Monday 05 January 2026 00:52:26 +0000 (0:00:04.165) 0:03:41.286 ******** 
2026-01-05 00:55:47.043208 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.043212 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.043219 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.043226 | orchestrator |
2026-01-05 00:55:47.043233 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-01-05 00:55:47.043240 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:02.344) 0:03:43.631 ********
2026-01-05 00:55:47.043248 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.043256 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.043263 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.043268 | orchestrator |
2026-01-05 00:55:47.043272 | orchestrator | TASK [include_role : masakari] *************************************************
2026-01-05 00:55:47.043276 | orchestrator | Monday 05 January 2026 00:52:30 +0000 (0:00:01.539) 0:03:45.170 ********
2026-01-05 00:55:47.043280 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.043287 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.043294 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.043303 | orchestrator |
2026-01-05 00:55:47.043316 | orchestrator | TASK [include_role : memcached] ************************************************
2026-01-05 00:55:47.043324 | orchestrator | Monday 05 January 2026 00:52:31 +0000 (0:00:00.275) 0:03:45.445 ********
2026-01-05 00:55:47.043339 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.043347 | orchestrator |
2026-01-05 00:55:47.043355 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-01-05 00:55:47.043360 | orchestrator | Monday 05 January 2026 00:52:32 +0000 (0:00:01.241) 0:03:46.687 ********
2026-01-05 00:55:47.043366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:55:47.043378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:55:47.043385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:55:47.043390 | orchestrator | 2026-01-05 00:55:47.043394 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-05 00:55:47.043400 | orchestrator | Monday 05 January 2026 00:52:33 +0000 (0:00:01.504) 0:03:48.191 ******** 2026-01-05 00:55:47.043407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:55:47.043418 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.043436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:55:47.043452 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.043539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:55:47.043548 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.043555 | orchestrator | 2026-01-05 00:55:47.043562 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-05 00:55:47.043570 | orchestrator | Monday 05 January 2026 00:52:34 +0000 (0:00:00.369) 0:03:48.561 ******** 2026-01-05 00:55:47.043577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:55:47.043594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'active_passive': True}}) 
2026-01-05 00:55:47.043604 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.043610 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.043617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}) 
2026-01-05 00:55:47.043624 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.043633 | orchestrator |
2026-01-05 00:55:47.043639 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-01-05 00:55:47.043646 | orchestrator | Monday 05 January 2026 00:52:35 +0000 (0:00:00.786) 0:03:49.348 ********
2026-01-05 00:55:47.043654 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.043662 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.043670 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.043677 | orchestrator |
2026-01-05 00:55:47.043684 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-01-05 00:55:47.043690 | orchestrator | Monday 05 January 2026 00:52:35 +0000 (0:00:00.427) 0:03:49.776 ********
2026-01-05 00:55:47.043698 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.043704 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.043711 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.043718 | orchestrator |
2026-01-05 00:55:47.043726 | orchestrator | TASK [include_role : mistral] **************************************************
2026-01-05 00:55:47.043734 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:01.189) 0:03:50.965 ********
2026-01-05 00:55:47.043742 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.043749 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.043768 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.043776 | orchestrator |
2026-01-05 00:55:47.043783 | orchestrator | TASK [include_role : neutron] **************************************************
2026-01-05 00:55:47.043791 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:00.270) 0:03:51.235 ********
2026-01-05 00:55:47.043798 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.043804 | orchestrator |
2026-01-05 00:55:47.043811 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-01-05 00:55:47.043817 | orchestrator | Monday 05 January 2026 00:52:38 +0000 (0:00:01.561) 0:03:52.797 ********
2026-01-05 00:55:47.043829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.043837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.043850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-05 00:55:47.043855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.043865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-05 00:55:47.043873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.043880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.043890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.043895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-05 00:55:47.043904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-05 00:55:47.043916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.043923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 00:55:47.043928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.043935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.043940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.043949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  
2026-01-05 00:55:47.043956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.043962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-05 00:55:47.043967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.043975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.043984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.043989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-05 00:55:47.043996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 00:55:47.044009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.044014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-05 00:55:47.044024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 
'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-05 00:55:47.044063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 00:55:47.044078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.044253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044261 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-05 00:55:47.044278 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.044335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044342 | orchestrator | 2026-01-05 00:55:47.044349 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-05 00:55:47.044360 | orchestrator | Monday 05 January 2026 00:52:44 +0000 (0:00:05.658) 0:03:58.456 ******** 2026-01-05 00:55:47.044375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.044383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 
'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-05 00:55:47.044449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-05 00:55:47.044474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 00:55:47.044536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-05 00:55:47.044570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.044625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044632 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.044636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.044641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-05 00:55:47.044653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-05 00:55:47.044689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 00:55:47.044712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-05 00:55:47.044787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.044796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:55:47.044851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 
'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-05 00:55:47.044856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-05 00:55:47.044868 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.044876 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 00:55:47.044936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:47.044940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.044947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-05 00:55:47.044957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-05 00:55:47.044961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-05 00:55:47.045009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:55:47.045013 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.045017 | orchestrator |
2026-01-05 00:55:47.045021 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-01-05 00:55:47.045025 | orchestrator | Monday 05 January 2026 00:52:47 +0000 (0:00:03.279) 0:04:01.735 ********
2026-01-05 00:55:47.045030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.045035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.045040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.045049 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.045055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.045059 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.045063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.045067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.045071 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.045075 | orchestrator |
2026-01-05 00:55:47.045079 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-01-05 00:55:47.045083 | orchestrator | Monday 05 January 2026 00:52:50 +0000 (0:00:03.041) 0:04:04.777 ********
2026-01-05 00:55:47.045087 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.045091 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.045095 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.045099 | orchestrator |
2026-01-05 00:55:47.045103 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-01-05 00:55:47.045107 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:01.290) 0:04:06.067 ********
2026-01-05 00:55:47.045111 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.045115 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.045119 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.045123 | orchestrator |
2026-01-05 00:55:47.045126 | orchestrator | TASK [include_role : placement] ************************************************
2026-01-05 00:55:47.045138 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:02.015) 0:04:08.082 ********
2026-01-05 00:55:47.045142 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.045146 | orchestrator |
2026-01-05 00:55:47.045150 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-01-05 00:55:47.045154 | orchestrator | Monday 05 January 2026 00:52:55 +0000 (0:00:01.410) 0:04:09.493 ********
2026-01-05 00:55:47.045193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi':
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-05 00:55:47.045200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-05 00:55:47.045215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.045220 | orchestrator |
2026-01-05 00:55:47.045225 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-01-05 00:55:47.045229 | orchestrator | Monday 05 January 2026 00:52:58 +0000 (0:00:03.227) 0:04:12.720 ********
2026-01-05 00:55:47.045260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.045266 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.045271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.045280 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.045288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.045295 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.045304 | orchestrator |
2026-01-05 00:55:47.045316 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-01-05 00:55:47.045322 | orchestrator | Monday 05 January 2026 00:52:58 +0000 (0:00:00.467) 0:04:13.188 ********
2026-01-05 00:55:47.045329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-05 00:55:47.045348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-05 00:55:47.045356 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.045363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-05 00:55:47.045370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-05 00:55:47.045376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value':
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-05 00:55:47.045434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-05 00:55:47.045444 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.045451 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.045474 | orchestrator | 2026-01-05 00:55:47.045480 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-05 00:55:47.045486 | orchestrator | Monday 05 January 2026 00:52:59 +0000 (0:00:01.085) 0:04:14.274 ******** 2026-01-05 00:55:47.045492 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.045498 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.045504 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.045510 | orchestrator | 2026-01-05 00:55:47.045517 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-05 00:55:47.045531 | orchestrator | Monday 05 January 2026 00:53:01 +0000 (0:00:01.687) 0:04:15.961 ******** 2026-01-05 00:55:47.045548 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.045555 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.045562 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.045569 | orchestrator | 2026-01-05 00:55:47.045575 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-05 00:55:47.045581 | orchestrator | Monday 05 January 2026 00:53:03 +0000 (0:00:02.357) 0:04:18.318 ******** 2026-01-05 00:55:47.045588 | orchestrator | 
included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.045595 | orchestrator | 2026-01-05 00:55:47.045601 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-05 00:55:47.045607 | orchestrator | Monday 05 January 2026 00:53:05 +0000 (0:00:01.680) 0:04:19.999 ******** 2026-01-05 00:55:47.045616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.045629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.045698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.045719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.045728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.045804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.045837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045855 | orchestrator | 2026-01-05 00:55:47.045863 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-05 00:55:47.045867 | orchestrator | Monday 05 January 2026 00:53:12 +0000 (0:00:06.518) 0:04:26.518 ******** 2026-01-05 00:55:47.045891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.045901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.045905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045915 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.045923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.045940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.045949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.045957 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.045964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.045969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.045999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.046007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.046100 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.046113 | orchestrator | 2026-01-05 00:55:47.046119 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-05 00:55:47.046125 | orchestrator | Monday 05 January 2026 00:53:13 +0000 (0:00:00.916) 0:04:27.435 ******** 2026-01-05 00:55:47.046133 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.046140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.046148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.046155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.046159 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.046163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.046173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.046177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.046181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.046192 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.046196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.046199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.046203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.046234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.046238 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.046242 | orchestrator |
2026-01-05 00:55:47.046246 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-01-05 00:55:47.046250 | orchestrator | Monday 05 January 2026 00:53:14 +0000 (0:00:01.203) 0:04:28.638 ********
2026-01-05 00:55:47.046254 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.046258 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.046261 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.046265 | orchestrator |
2026-01-05 00:55:47.046269 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-01-05 00:55:47.046273 | orchestrator | Monday 05 January 2026 00:53:16 +0000 (0:00:01.690) 0:04:30.329 ********
2026-01-05 00:55:47.046277 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.046280 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.046284 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.046288 | orchestrator |
2026-01-05 00:55:47.046292 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-01-05 00:55:47.046296 | orchestrator | Monday 05 January 2026 00:53:18 +0000 (0:00:02.092) 0:04:32.422 ********
2026-01-05 00:55:47.046300 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.046304 | orchestrator |
2026-01-05 00:55:47.046307 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-01-05 00:55:47.046311 | orchestrator | Monday 05 January 2026 00:53:19 +0000 (0:00:01.645) 0:04:34.067 ********
2026-01-05 00:55:47.046315 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-01-05 00:55:47.046319 | orchestrator |
2026-01-05 00:55:47.046323 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-01-05 00:55:47.046327 | orchestrator | Monday 05 January 2026 00:53:20 +0000 (0:00:01.101) 0:04:35.168 ********
2026-01-05 00:55:47.046331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046406 | orchestrator |
2026-01-05 00:55:47.046410 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-01-05 00:55:47.046417 | orchestrator | Monday 05 January 2026 00:53:25 +0000 (0:00:04.199) 0:04:39.367 ********
2026-01-05 00:55:47.046424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046431 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.046485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046494 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.046501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046508 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.046515 | orchestrator |
2026-01-05 00:55:47.046521 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-01-05 00:55:47.046528 | orchestrator | Monday 05 January 2026 00:53:26 +0000 (0:00:01.661) 0:04:41.029 ********
2026-01-05 00:55:47.046536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-05 00:55:47.046544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-05 00:55:47.046551 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.046558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-05 00:55:47.046565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-05 00:55:47.046581 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.046588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-05 00:55:47.046595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-05 00:55:47.046602 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.046609 | orchestrator |
2026-01-05 00:55:47.046621 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-05 00:55:47.046628 | orchestrator | Monday 05 January 2026 00:53:29 +0000 (0:00:02.577) 0:04:43.606 ********
2026-01-05 00:55:47.046634 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.046640 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.046648 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.046654 | orchestrator |
2026-01-05 00:55:47.046661 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-05 00:55:47.046667 | orchestrator | Monday 05 January 2026 00:53:31 +0000 (0:00:02.679) 0:04:46.285 ********
2026-01-05 00:55:47.046674 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.046679 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.046683 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.046688 | orchestrator |
2026-01-05 00:55:47.046692 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-01-05 00:55:47.046697 | orchestrator | Monday 05 January 2026 00:53:34 +0000 (0:00:03.001) 0:04:49.286 ********
2026-01-05 00:55:47.046702 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-01-05 00:55:47.046706 | orchestrator |
2026-01-05 00:55:47.046711 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-01-05 00:55:47.046715 | orchestrator | Monday 05 January 2026 00:53:36 +0000 (0:00:01.417) 0:04:50.704 ********
2026-01-05 00:55:47.046721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046726 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.046752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046757 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.046762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046772 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.046777 | orchestrator |
2026-01-05 00:55:47.046782 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-01-05 00:55:47.046786 | orchestrator | Monday 05 January 2026 00:53:38 +0000 (0:00:02.112) 0:04:52.816 ********
2026-01-05 00:55:47.046791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046796 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.046801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046808 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.046819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-05 00:55:47.046826 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.046833 | orchestrator |
2026-01-05 00:55:47.046839 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-01-05 00:55:47.046846 | orchestrator | Monday 05 January 2026 00:53:40 +0000 (0:00:02.335) 0:04:55.151 ********
2026-01-05 00:55:47.046852 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.046859 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.046864 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.046871 | orchestrator |
2026-01-05 00:55:47.046877 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-05 00:55:47.046883 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:02.026) 0:04:57.179 ********
2026-01-05 00:55:47.046890 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.046896 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.046902 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.046908 | orchestrator |
2026-01-05 00:55:47.046914 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-05 00:55:47.046920 | orchestrator | Monday 05 January 2026 00:53:45 +0000 (0:00:02.444) 0:04:59.623 ********
2026-01-05 00:55:47.046926 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.046933 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.046940 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.046946 | orchestrator |
2026-01-05 00:55:47.046953 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-01-05 00:55:47.046958 | orchestrator | Monday 05 January 2026 00:53:48 +0000 (0:00:03.085) 0:05:02.709 ********
2026-01-05 00:55:47.046984 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-01-05 00:55:47.046998 | orchestrator |
2026-01-05 00:55:47.047004 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-01-05 00:55:47.047011 | orchestrator | Monday 05 January 2026 00:53:49 +0000 (0:00:00.882) 0:05:03.592 ********
2026-01-05 00:55:47.047018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-05 00:55:47.047026 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.047030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-05 00:55:47.047034 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.047038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-05 00:55:47.047042 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.047045 | orchestrator |
2026-01-05 00:55:47.047049 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-01-05 00:55:47.047054 | orchestrator | Monday 05 January 2026 00:53:51 +0000 (0:00:01.939) 0:05:05.531 ********
2026-01-05 00:55:47.047062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-05 00:55:47.047066 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.047069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-05 00:55:47.047073 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.047077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-05 00:55:47.047085 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.047088 | orchestrator |
2026-01-05 00:55:47.047092 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-01-05 00:55:47.047096 | orchestrator | Monday 05 January 2026 00:53:52 +0000 (0:00:01.246) 0:05:06.778 ********
2026-01-05 00:55:47.047100 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.047104 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.047125 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.047130 | orchestrator |
2026-01-05 00:55:47.047133 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-05 00:55:47.047137 | orchestrator | Monday 05 January 2026 00:53:54 +0000 (0:00:01.914) 0:05:08.692 ********
2026-01-05 00:55:47.047141 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.047145 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.047149 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.047153 | orchestrator |
2026-01-05 00:55:47.047156 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-05 00:55:47.047160 | orchestrator | Monday 05 January 2026 00:53:56 +0000 (0:00:02.625) 0:05:11.318 ********
2026-01-05 00:55:47.047164 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.047168 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.047172 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.047175 | orchestrator |
2026-01-05 00:55:47.047179 | orchestrator | TASK [include_role : octavia] **************************************************
2026-01-05 00:55:47.047183 | orchestrator | Monday 05 January 2026 00:54:00 +0000 (0:00:03.393) 0:05:14.711 ********
2026-01-05 00:55:47.047187 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.047190 | orchestrator |
2026-01-05 00:55:47.047194 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-01-05 00:55:47.047198 | orchestrator | Monday 05 January 2026 00:54:02 +0000 (0:00:01.661) 0:05:16.372 ********
2026-01-05 00:55:47.047203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 00:55:47.047208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 00:55:47.047216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.047249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 00:55:47.047253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 00:55:47.047257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.047288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 00:55:47.047292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 00:55:47.047296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.047315 | orchestrator |
2026-01-05 00:55:47.047319 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-01-05 00:55:47.047323 | orchestrator | Monday 05 January 2026 00:54:06 +0000 (0:00:03.999) 0:05:20.372 ********
2026-01-05 00:55:47.047327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 00:55:47.047343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 00:55:47.047348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:55:47.047366 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.047373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 00:55:47.047377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 00:55:47.047392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-05 00:55:47.047400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.047404 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.047408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:55:47.047420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:55:47.047424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:55:47.047442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:55:47.047446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:55:47.047450 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.047469 | orchestrator | 
2026-01-05 00:55:47.047477 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-05 00:55:47.047482 | orchestrator | Monday 05 January 2026 00:54:06 +0000 (0:00:00.884) 0:05:21.256 ******** 2026-01-05 00:55:47.047486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:55:47.047490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:55:47.047495 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.047499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:55:47.047506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:55:47.047510 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.047514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:55:47.047518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:55:47.047521 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
00:55:47.047525 | orchestrator | 2026-01-05 00:55:47.047531 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-05 00:55:47.047535 | orchestrator | Monday 05 January 2026 00:54:08 +0000 (0:00:01.193) 0:05:22.449 ******** 2026-01-05 00:55:47.047539 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.047543 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.047547 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.047550 | orchestrator | 2026-01-05 00:55:47.047554 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-05 00:55:47.047558 | orchestrator | Monday 05 January 2026 00:54:09 +0000 (0:00:01.283) 0:05:23.733 ******** 2026-01-05 00:55:47.047562 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.047566 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.047569 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.047573 | orchestrator | 2026-01-05 00:55:47.047577 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-05 00:55:47.047580 | orchestrator | Monday 05 January 2026 00:54:11 +0000 (0:00:02.089) 0:05:25.823 ******** 2026-01-05 00:55:47.047584 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.047588 | orchestrator | 2026-01-05 00:55:47.047592 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-05 00:55:47.047596 | orchestrator | Monday 05 January 2026 00:54:12 +0000 (0:00:01.485) 0:05:27.309 ******** 2026-01-05 00:55:47.047616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.047623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.047631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.047639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:55:47.047656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:55:47.047662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:55:47.047671 | orchestrator | 2026-01-05 00:55:47.047675 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-05 00:55:47.047679 | orchestrator | Monday 05 January 2026 00:54:17 +0000 (0:00:04.665) 0:05:31.974 ******** 2026-01-05 00:55:47.047683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.047692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-05 00:55:47.047696 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.047712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.047717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-05 00:55:47.047726 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.047730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.047737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-05 00:55:47.047741 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.047745 | orchestrator | 2026-01-05 00:55:47.047749 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-05 00:55:47.047752 | orchestrator | Monday 05 January 2026 00:54:18 +0000 (0:00:00.590) 0:05:32.565 ******** 2026-01-05 00:55:47.047756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.047773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-05 00:55:47.047782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-05 00:55:47.047786 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.047790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.047794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-05 00:55:47.047798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-05 00:55:47.047801 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.047805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-05 00:55:47.047809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-05 00:55:47.047813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-05 00:55:47.047817 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.047820 | orchestrator | 2026-01-05 00:55:47.047827 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-05 00:55:47.047832 | orchestrator | Monday 05 January 2026 00:54:19 +0000 (0:00:01.305) 0:05:33.870 ******** 2026-01-05 00:55:47.047838 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.047845 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.047852 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.047855 | orchestrator | 2026-01-05 00:55:47.047859 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-05 00:55:47.047863 | orchestrator | Monday 05 January 2026 00:54:20 +0000 (0:00:00.491) 0:05:34.362 ******** 2026-01-05 00:55:47.047867 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.047871 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.047874 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.047878 | orchestrator | 2026-01-05 00:55:47.047882 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-05 00:55:47.047885 | orchestrator | Monday 05 January 2026 00:54:21 +0000 (0:00:01.394) 0:05:35.756 ******** 2026-01-05 00:55:47.047889 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:47.047893 | orchestrator | 2026-01-05 00:55:47.047897 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-05 00:55:47.047901 | orchestrator | Monday 05 January 2026 
00:54:23 +0000 (0:00:01.762) 0:05:37.518 ******** 2026-01-05 00:55:47.047921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-05 00:55:47.047930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:55:47.047934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.047938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.047942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:55:47.047950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-05 00:55:47.047959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:55:47.047977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.047981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.047985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:55:47.047992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-05 00:55:47.047996 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:55:47.048004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:55:47.048035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.048045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-05 00:55:47.048052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:55:47.048094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.048102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-05 00:55:47.048108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:55:47.048130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:55:47.048161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-05 00:55:47.048167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:55:47.048196 | orchestrator | 2026-01-05 00:55:47.048202 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-05 00:55:47.048209 | orchestrator | Monday 05 January 2026 00:54:27 +0000 (0:00:04.288) 0:05:41.807 ******** 2026-01-05 00:55:47.048230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-05 00:55:47.048235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:55:47.048240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:55:47.048256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 
00:55:47.048265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-05 00:55:47.048279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:55:47.048293 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.048305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-05 00:55:47.048318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:55:47.048325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:55:47.048343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-01-05 00:55:47.048351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:55:47.048362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['option httpchk', 'timeout server 45s']}}}})
2026-01-05 00:55:47.048377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:55:47.048383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:55:47.048387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:55:47.048396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-05 00:55:47.048400 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.048404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 00:55:47.048408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:55:47.048418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:55:47.048423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 00:55:47.048429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:55:47.048434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-01-05 00:55:47.048438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:55:47.048442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:55:47.048452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:55:47.048498 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.048502 | orchestrator |
2026-01-05 00:55:47.048506 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-01-05 00:55:47.048510 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:00.976) 0:05:42.784 ********
2026-01-05 00:55:47.048515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-01-05 00:55:47.048521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-01-05 00:55:47.048530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.048541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.048548 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.048554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-01-05 00:55:47.048561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-01-05 00:55:47.048568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.048580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-01-05 00:55:47.048587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-01-05 00:55:47.048594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.048601 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.048612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.048619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-01-05 00:55:47.048625 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.048632 | orchestrator |
2026-01-05 00:55:47.048638 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-01-05 00:55:47.048644 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:00.903) 0:05:43.687 ********
2026-01-05 00:55:47.048650 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.048657 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.048663 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.048669 | orchestrator |
2026-01-05 00:55:47.048675 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-01-05 00:55:47.048681 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.870) 0:05:44.558 ********
2026-01-05 00:55:47.048687 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.048694 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.048700 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.048706 | orchestrator |
2026-01-05 00:55:47.048713 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-01-05 00:55:47.048719 | orchestrator | Monday 05 January 2026 00:54:31 +0000 (0:00:01.424) 0:05:45.982 ********
2026-01-05 00:55:47.048725 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.048731 | orchestrator |
2026-01-05 00:55:47.048738 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-01-05 00:55:47.048751 | orchestrator | Monday 05 January 2026 00:54:33 +0000 (0:00:01.433) 0:05:47.415 ********
2026-01-05 00:55:47.048759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:55:47.048776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:55:47.048788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:55:47.048796 | orchestrator |
2026-01-05 00:55:47.048803 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-01-05 00:55:47.048810 | orchestrator | Monday 05 January 2026 00:54:35 +0000 (0:00:02.891) 0:05:50.307 ********
2026-01-05 00:55:47.048816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:55:47.048827 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.048833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:55:47.048846 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.048853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:55:47.048860 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.048867 | orchestrator |
2026-01-05 00:55:47.048873 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-01-05 00:55:47.048880 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:00.815) 0:05:51.122 ********
2026-01-05 00:55:47.048887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-05 00:55:47.048894 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.048903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-05 00:55:47.048907 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.048911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-05 00:55:47.048915 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.048919 | orchestrator |
2026-01-05 00:55:47.048922 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-01-05 00:55:47.048926 | orchestrator | Monday 05 January 2026 00:54:37 +0000 (0:00:00.674) 0:05:51.797 ********
2026-01-05 00:55:47.048930 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.048933 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.048937 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.048941 | orchestrator |
2026-01-05 00:55:47.048945 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-01-05 00:55:47.048948 | orchestrator | Monday 05 January 2026 00:54:37 +0000 (0:00:00.470) 0:05:52.267 ********
2026-01-05 00:55:47.048952 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.048956 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.048959 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.048963 | orchestrator |
2026-01-05 00:55:47.048967 | orchestrator | TASK [include_role : skyline] **************************************************
2026-01-05 00:55:47.048970 | orchestrator | Monday 05 January 2026 00:54:39 +0000 (0:00:01.572) 0:05:53.840 ********
2026-01-05 00:55:47.048979 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.048983 | orchestrator |
2026-01-05 00:55:47.048986 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-01-05 00:55:47.048990 | orchestrator | Monday 05 January 2026 00:54:41 +0000 (0:00:01.896) 0:05:55.737 ********
2026-01-05 00:55:47.048998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-01-05 00:55:47.049003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-01-05 00:55:47.049012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-01-05 00:55:47.049016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.049028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.049035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.049061 | orchestrator |
2026-01-05 00:55:47.049072 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-01-05 00:55:47.049079 | orchestrator | Monday 05 January 2026 00:54:48 +0000 (0:00:06.871) 0:06:02.608 ********
2026-01-05 00:55:47.049089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-01-05 00:55:47.049096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.049111 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.049123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-01-05 00:55:47.049130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.049136 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.049149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-01-05 00:55:47.049157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-05 00:55:47.049162 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.049165 | orchestrator |
2026-01-05 00:55:47.049171 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-01-05 00:55:47.049175 | orchestrator | Monday 05 January 2026 00:54:49 +0000 (0:00:01.060) 0:06:03.669 ********
2026-01-05 00:55:47.049180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-01-05 00:55:47.049184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-01-05 00:55:47.049189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-05 00:55:47.049193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-05 00:55:47.049197
| orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.049201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-05 00:55:47.049205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-05 00:55:47.049209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-05 00:55:47.049212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-05 00:55:47.049216 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.049223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-05 00:55:47.049231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-05 
00:55:47.049235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-05 00:55:47.049239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-05 00:55:47.049243 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.049247 | orchestrator | 2026-01-05 00:55:47.049251 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-05 00:55:47.049254 | orchestrator | Monday 05 January 2026 00:54:50 +0000 (0:00:01.438) 0:06:05.107 ******** 2026-01-05 00:55:47.049258 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.049262 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.049266 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.049269 | orchestrator | 2026-01-05 00:55:47.049273 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-05 00:55:47.049277 | orchestrator | Monday 05 January 2026 00:54:52 +0000 (0:00:01.284) 0:06:06.392 ******** 2026-01-05 00:55:47.049281 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:47.049284 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:47.049290 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:47.049294 | orchestrator | 2026-01-05 00:55:47.049298 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-05 00:55:47.049302 | orchestrator | Monday 05 January 2026 00:54:54 +0000 (0:00:02.268) 0:06:08.660 ******** 2026-01-05 00:55:47.049306 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 00:55:47.049309 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.049313 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.049317 | orchestrator | 2026-01-05 00:55:47.049321 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-05 00:55:47.049325 | orchestrator | Monday 05 January 2026 00:54:54 +0000 (0:00:00.342) 0:06:09.004 ******** 2026-01-05 00:55:47.049328 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.049332 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.049336 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.049339 | orchestrator | 2026-01-05 00:55:47.049343 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-05 00:55:47.049347 | orchestrator | Monday 05 January 2026 00:54:55 +0000 (0:00:00.687) 0:06:09.691 ******** 2026-01-05 00:55:47.049351 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.049354 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.049358 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.049362 | orchestrator | 2026-01-05 00:55:47.049366 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-05 00:55:47.049369 | orchestrator | Monday 05 January 2026 00:54:55 +0000 (0:00:00.327) 0:06:10.018 ******** 2026-01-05 00:55:47.049373 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:47.049377 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:47.049381 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:47.049384 | orchestrator | 2026-01-05 00:55:47.049388 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-05 00:55:47.049396 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:00.334) 0:06:10.353 ******** 2026-01-05 00:55:47.049400 | orchestrator | 
skipping: [testbed-node-0]
2026-01-05 00:55:47.049404 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.049407 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.049411 | orchestrator |
2026-01-05 00:55:47.049415 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-01-05 00:55:47.049419 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:00.347) 0:06:10.701 ********
2026-01-05 00:55:47.049423 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:55:47.049426 | orchestrator |
2026-01-05 00:55:47.049430 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-01-05 00:55:47.049434 | orchestrator | Monday 05 January 2026 00:54:58 +0000 (0:00:01.871) 0:06:12.572 ********
2026-01-05 00:55:47.049438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-05 00:55:47.049445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-05 00:55:47.049450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-05 00:55:47.049498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:55:47.049504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:55:47.049512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:55:47.049516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:55:47.049521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:55:47.049525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:55:47.049529 | orchestrator |
2026-01-05 00:55:47.049533 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-01-05 00:55:47.049537 | orchestrator | Monday 05 January 2026 00:55:00 +0000 (0:00:02.591) 0:06:15.164 ********
2026-01-05 00:55:47.049541 | orchestrator | changed: [testbed-node-0] => {
2026-01-05 00:55:47.049545 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:55:47.049548 | orchestrator | }
2026-01-05 00:55:47.049552 | orchestrator | changed: [testbed-node-1] => {
2026-01-05 00:55:47.049556 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:55:47.049560 | orchestrator | }
2026-01-05 00:55:47.049563 | orchestrator | changed: [testbed-node-2] => {
2026-01-05 00:55:47.049567 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:55:47.049571 | orchestrator | }
2026-01-05 00:55:47.049574 | orchestrator |
2026-01-05 00:55:47.049578 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-05 00:55:47.049582 | orchestrator | Monday 05 January 2026 00:55:01 +0000 (0:00:00.441) 0:06:15.605 ********
2026-01-05 00:55:47.049589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-05 00:55:47.049596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:55:47.049600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:55:47.049604 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.049661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-05 00:55:47.049682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:55:47.049687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-05 00:55:47.049691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:55:47.049695 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.049702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:55:47.049710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:55:47.049714 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.049718 | orchestrator |
2026-01-05 00:55:47.049722 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-01-05 00:55:47.049726 | orchestrator | Monday 05 January 2026 00:55:03 +0000 (0:00:01.810) 0:06:17.416 ********
2026-01-05 00:55:47.049729 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.049733 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.049737 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.049741 | orchestrator |
2026-01-05 00:55:47.049745 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-01-05 00:55:47.049748 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:01.238) 0:06:18.654 ********
2026-01-05 00:55:47.049752 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.049756 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.049760 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.049763 | orchestrator |
2026-01-05 00:55:47.049767 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-01-05 00:55:47.049771 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:00.403) 0:06:19.058 ********
2026-01-05 00:55:47.049775 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.049778 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.049782 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.049786 | orchestrator |
2026-01-05 00:55:47.049790 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-01-05 00:55:47.049793 | orchestrator | Monday 05 January 2026 00:55:05 +0000 (0:00:01.067) 0:06:20.125 ********
2026-01-05 00:55:47.049797 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.049801 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.049805 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.049808 | orchestrator |
2026-01-05 00:55:47.049812 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-05 00:55:47.049816 | orchestrator | Monday 05 January 2026 00:55:06 +0000 (0:00:00.989) 0:06:21.115 ********
2026-01-05 00:55:47.049820 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.049826 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.049832 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.049842 | orchestrator |
2026-01-05 00:55:47.049848 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-05 00:55:47.049854 | orchestrator | Monday 05 January 2026 00:55:08 +0000 (0:00:01.419) 0:06:22.534 ********
2026-01-05 00:55:47.049860 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.049870 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.049876 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.049882 | orchestrator |
2026-01-05 00:55:47.049888 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-05 00:55:47.049893 | orchestrator | Monday 05 January 2026 00:55:18 +0000 (0:00:10.420) 0:06:32.954 ********
2026-01-05 00:55:47.049899 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.049905 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.049911 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.049922 | orchestrator |
2026-01-05 00:55:47.049928 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-05 00:55:47.049934 | orchestrator | Monday 05 January 2026 00:55:19 +0000 (0:00:00.850) 0:06:33.805 ********
2026-01-05 00:55:47.049940 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.049946 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.049952 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.049958 | orchestrator |
2026-01-05 00:55:47.049964 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-05 00:55:47.049971 | orchestrator | Monday 05 January 2026 00:55:28 +0000 (0:00:09.427) 0:06:43.232 ********
2026-01-05 00:55:47.049977 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.049983 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.049989 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.049995 | orchestrator |
2026-01-05 00:55:47.050001 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-05 00:55:47.050007 | orchestrator | Monday 05 January 2026 00:55:33 +0000 (0:00:05.035) 0:06:48.268 ********
2026-01-05 00:55:47.050049 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:55:47.050058 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:55:47.050064 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:55:47.050070 | orchestrator |
2026-01-05 00:55:47.050073 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-05 00:55:47.050078 | orchestrator | Monday 05 January 2026 00:55:38 +0000 (0:00:04.343) 0:06:52.611 ********
2026-01-05 00:55:47.050082 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.050085 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.050089 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.050093 | orchestrator |
2026-01-05 00:55:47.050097 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-05 00:55:47.050100 | orchestrator | Monday 05 January 2026 00:55:38 +0000 (0:00:00.381) 0:06:52.992 ********
2026-01-05 00:55:47.050104 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.050113 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.050117 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.050120 | orchestrator |
2026-01-05 00:55:47.050124 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-05 00:55:47.050128 | orchestrator | Monday 05 January 2026 00:55:39 +0000 (0:00:00.362) 0:06:53.355 ********
2026-01-05 00:55:47.050132 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.050135 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.050139 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.050143 | orchestrator |
2026-01-05 00:55:47.050146 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-05 00:55:47.050150 | orchestrator | Monday 05 January 2026 00:55:39 +0000 (0:00:00.677) 0:06:54.032 ********
2026-01-05 00:55:47.050154 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.050158 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.050161 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.050165 | orchestrator |
2026-01-05 00:55:47.050169 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-05 00:55:47.050172 | orchestrator | Monday 05 January 2026 00:55:40 +0000 (0:00:00.369) 0:06:54.402 ********
2026-01-05 00:55:47.050176 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.050180 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.050183 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.050187 | orchestrator |
2026-01-05 00:55:47.050191 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-05 00:55:47.050194 | orchestrator | Monday 05 January 2026 00:55:40 +0000 (0:00:00.453) 0:06:54.857 ********
2026-01-05 00:55:47.050198 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:55:47.050202 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:55:47.050206 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:55:47.050209 | orchestrator |
2026-01-05 00:55:47.050218 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-05 00:55:47.050222 | orchestrator | Monday 05 January 2026 00:55:40 +0000 (0:00:04.125) 0:06:55.223 ********
2026-01-05 00:55:47.050226 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:55:47.050229 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:55:47.050233 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:55:47.050237 | orchestrator | 2026-01-05 00:55:47.050240 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-05 00:55:47.050244 | orchestrator | Monday 05 January 2026 00:55:45 +0000 (0:00:04.125) 0:06:59.348 ******** 2026-01-05 00:55:47.050248 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:55:47.050252 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:55:47.050255 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:55:47.050259 | orchestrator | 2026-01-05 00:55:47.050263 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:55:47.050267 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-05 00:55:47.050272 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-05 00:55:47.050276 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-05 00:55:47.050279 | orchestrator | 2026-01-05 00:55:47.050283 | orchestrator | 2026-01-05 00:55:47.050287 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:55:47.050290 | orchestrator | Monday 05 January 2026 00:55:45 +0000 (0:00:00.884) 0:07:00.233 ******** 2026-01-05 00:55:47.050297 | orchestrator | =============================================================================== 2026-01-05 00:55:47.050301 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.42s 2026-01-05 00:55:47.050305 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.43s 2026-01-05 00:55:47.050309 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.87s 2026-01-05 00:55:47.050312 | orchestrator | haproxy-config : Copying over nova haproxy config 
----------------------- 6.52s 2026-01-05 00:55:47.050316 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 6.35s 2026-01-05 00:55:47.050320 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.66s 2026-01-05 00:55:47.050323 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.48s 2026-01-05 00:55:47.050327 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.34s 2026-01-05 00:55:47.050331 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.06s 2026-01-05 00:55:47.050334 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 5.04s 2026-01-05 00:55:47.050338 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.83s 2026-01-05 00:55:47.050342 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.78s 2026-01-05 00:55:47.050346 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.67s 2026-01-05 00:55:47.050349 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.45s 2026-01-05 00:55:47.050353 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.34s 2026-01-05 00:55:47.050357 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.31s 2026-01-05 00:55:47.050361 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.29s 2026-01-05 00:55:47.050364 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.29s 2026-01-05 00:55:47.050368 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.25s 2026-01-05 00:55:47.050372 | orchestrator | haproxy-config : Copying over 
nova-cell:nova-novncproxy haproxy config --- 4.20s 2026-01-05 00:55:47.050381 | orchestrator | 2026-01-05 00:55:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:55:50.088963 | orchestrator | 2026-01-05 00:55:50 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED 2026-01-05 00:55:50.091926 | orchestrator | 2026-01-05 00:55:50 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 2026-01-05 00:55:50.093394 | orchestrator | 2026-01-05 00:55:50 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:55:50.093761 | orchestrator | 2026-01-05 00:55:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:55:53.140761 | orchestrator | 2026-01-05 00:55:53 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED 2026-01-05 00:55:53.143327 | orchestrator | 2026-01-05 00:55:53 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 2026-01-05 00:55:53.144221 | orchestrator | 2026-01-05 00:55:53 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:55:53.144277 | orchestrator | 2026-01-05 00:55:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:55:56.180410 | orchestrator | 2026-01-05 00:55:56 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED 2026-01-05 00:55:56.181329 | orchestrator | 2026-01-05 00:55:56 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 2026-01-05 00:55:56.182686 | orchestrator | 2026-01-05 00:55:56 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state STARTED 2026-01-05 00:55:56.184176 | orchestrator | 2026-01-05 00:55:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:55:59.224965 | orchestrator | 2026-01-05 00:55:59 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED 2026-01-05 00:55:59.227829 | orchestrator | 2026-01-05 00:55:59 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 
2026-01-05 00:57:21.645724 | orchestrator | 2026-01-05 00:57:21 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED 2026-01-05 00:57:21.646143 | orchestrator
| 2026-01-05 00:57:21 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 2026-01-05 00:57:21.653526 | orchestrator | 2026-01-05 00:57:21 | INFO  | Task a8a960ae-b858-49a6-a145-9ae526aca3ae is in state SUCCESS 2026-01-05 00:57:21.653707 | orchestrator | 2026-01-05 00:57:21.655885 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-05 00:57:21.656149 | orchestrator | 2.16.14 2026-01-05 00:57:21.656170 | orchestrator | 2026-01-05 00:57:21.656181 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-05 00:57:21.656192 | orchestrator | 2026-01-05 00:57:21.656202 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-05 00:57:21.656212 | orchestrator | Monday 05 January 2026 00:46:03 +0000 (0:00:00.995) 0:00:00.995 ******** 2026-01-05 00:57:21.656222 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.656233 | orchestrator | 2026-01-05 00:57:21.656243 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-05 00:57:21.656265 | orchestrator | Monday 05 January 2026 00:46:04 +0000 (0:00:01.249) 0:00:02.245 ******** 2026-01-05 00:57:21.656345 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.656355 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.656364 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.656373 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.656383 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.656393 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.656404 | orchestrator | 2026-01-05 00:57:21.656414 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-05 00:57:21.656424 | orchestrator | Monday 05 January 
2026 00:46:06 +0000 (0:00:01.469) 0:00:03.714 ******** 2026-01-05 00:57:21.656434 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.656444 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.656454 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.656465 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.656476 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.656486 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.656821 | orchestrator | 2026-01-05 00:57:21.656836 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-05 00:57:21.656846 | orchestrator | Monday 05 January 2026 00:46:06 +0000 (0:00:00.884) 0:00:04.599 ******** 2026-01-05 00:57:21.656855 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.656865 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.656874 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.656884 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.656894 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.656905 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.656916 | orchestrator | 2026-01-05 00:57:21.656926 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-05 00:57:21.656959 | orchestrator | Monday 05 January 2026 00:46:08 +0000 (0:00:01.179) 0:00:05.778 ******** 2026-01-05 00:57:21.656969 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.657355 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.657451 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.657462 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.657471 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.657480 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.657490 | orchestrator | 2026-01-05 00:57:21.657499 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-05 00:57:21.657508 | 
orchestrator | Monday 05 January 2026 00:46:08 +0000 (0:00:00.790) 0:00:06.569 ******** 2026-01-05 00:57:21.657518 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.657528 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.657537 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.657546 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.657557 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.657566 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.657576 | orchestrator | 2026-01-05 00:57:21.657586 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-05 00:57:21.657596 | orchestrator | Monday 05 January 2026 00:46:09 +0000 (0:00:00.736) 0:00:07.306 ******** 2026-01-05 00:57:21.657605 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.657616 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.657625 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.657635 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.657645 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.657655 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.657664 | orchestrator | 2026-01-05 00:57:21.657674 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-05 00:57:21.657685 | orchestrator | Monday 05 January 2026 00:46:10 +0000 (0:00:00.868) 0:00:08.174 ******** 2026-01-05 00:57:21.657694 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.657704 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.657714 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.657723 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.657733 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.657739 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.657745 | orchestrator | 2026-01-05 00:57:21.657751 | orchestrator | TASK [ceph-facts : Set_fact ceph_release 
ceph_stable_release] ****************** 2026-01-05 00:57:21.657757 | orchestrator | Monday 05 January 2026 00:46:11 +0000 (0:00:01.100) 0:00:09.275 ******** 2026-01-05 00:57:21.657763 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.657768 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.657774 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.657780 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.657786 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.657791 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.657797 | orchestrator | 2026-01-05 00:57:21.657803 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-05 00:57:21.657809 | orchestrator | Monday 05 January 2026 00:46:12 +0000 (0:00:01.330) 0:00:10.606 ******** 2026-01-05 00:57:21.657814 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 00:57:21.657820 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 00:57:21.657826 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 00:57:21.657877 | orchestrator | 2026-01-05 00:57:21.657884 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-05 00:57:21.657890 | orchestrator | Monday 05 January 2026 00:46:13 +0000 (0:00:00.661) 0:00:11.267 ******** 2026-01-05 00:57:21.657895 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.657901 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.657907 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.658184 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.658209 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.658216 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.658223 | orchestrator | 2026-01-05 00:57:21.658229 | orchestrator | TASK [ceph-facts : Find a running 
mon container] ******************************* 2026-01-05 00:57:21.658235 | orchestrator | Monday 05 January 2026 00:46:15 +0000 (0:00:01.546) 0:00:12.814 ******** 2026-01-05 00:57:21.658241 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 00:57:21.658247 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 00:57:21.658253 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 00:57:21.658259 | orchestrator | 2026-01-05 00:57:21.658265 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-05 00:57:21.658298 | orchestrator | Monday 05 January 2026 00:46:18 +0000 (0:00:03.703) 0:00:16.518 ******** 2026-01-05 00:57:21.658306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 00:57:21.658312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 00:57:21.658388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-05 00:57:21.658399 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.658406 | orchestrator | 2026-01-05 00:57:21.658414 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-05 00:57:21.658422 | orchestrator | Monday 05 January 2026 00:46:19 +0000 (0:00:00.975) 0:00:17.494 ******** 2026-01-05 00:57:21.658433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.658444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'})  2026-01-05 00:57:21.658453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.658462 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.658471 | orchestrator | 2026-01-05 00:57:21.658480 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-05 00:57:21.658487 | orchestrator | Monday 05 January 2026 00:46:20 +0000 (0:00:00.971) 0:00:18.465 ******** 2026-01-05 00:57:21.658493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.658504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.658513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-01-05 00:57:21.658530 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.658539 | orchestrator | 2026-01-05 00:57:21.658548 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-05 00:57:21.658556 | orchestrator | Monday 05 January 2026 00:46:21 +0000 (0:00:00.660) 0:00:19.125 ******** 2026-01-05 00:57:21.658599 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-05 00:46:15.864641', 'end': '2026-01-05 00:46:16.142853', 'delta': '0:00:00.278212', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.658614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-05 00:46:16.865272', 'end': '2026-01-05 00:46:17.231983', 'delta': '0:00:00.366711', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.658620 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-05 00:46:18.111044', 'end': '2026-01-05 00:46:18.451973', 'delta': '0:00:00.340929', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.658625 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.658631 | orchestrator |
2026-01-05 00:57:21.658636 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-05 00:57:21.658641 | orchestrator | Monday 05 January 2026 00:46:21 +0000 (0:00:00.439) 0:00:19.565 ********
2026-01-05 00:57:21.658646 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.658651 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.658656 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.658661 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.658666 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.658671 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.658676 | orchestrator |
2026-01-05 00:57:21.658685 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-05 00:57:21.658696 | orchestrator | Monday 05 January 2026 00:46:24 +0000 (0:00:02.758) 0:00:22.323 ********
2026-01-05 00:57:21.658709 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:57:21.658718 | orchestrator |
2026-01-05 00:57:21.658727 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-05 00:57:21.658736 | orchestrator | Monday 05 January 2026 00:46:25 +0000 (0:00:00.906) 0:00:23.230 ********
2026-01-05 00:57:21.658745 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.658856 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.658865 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.658882 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.658891 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659076 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659086 | orchestrator |
2026-01-05 00:57:21.659091 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-05 00:57:21.659096 | orchestrator | Monday 05 January 2026 00:46:26 +0000 (0:00:01.264) 0:00:24.495 ********
2026-01-05 00:57:21.659104 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659113 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659122 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659131 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659140 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659145 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659150 | orchestrator |
2026-01-05 00:57:21.659155 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-05 00:57:21.659160 | orchestrator | Monday 05 January 2026 00:46:28 +0000 (0:00:01.284) 0:00:25.779 ********
2026-01-05 00:57:21.659169 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659178 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659187 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659283 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659294 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659299 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659304 | orchestrator |
2026-01-05 00:57:21.659309 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-05 00:57:21.659314 | orchestrator | Monday 05 January 2026 00:46:29 +0000 (0:00:01.064) 0:00:26.844 ********
2026-01-05 00:57:21.659319 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659324 | orchestrator |
2026-01-05 00:57:21.659329 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-05 00:57:21.659334 | orchestrator | Monday 05 January 2026 00:46:29 +0000 (0:00:00.142) 0:00:26.986 ********
2026-01-05 00:57:21.659339 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659344 | orchestrator |
2026-01-05 00:57:21.659350 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-05 00:57:21.659355 | orchestrator | Monday 05 January 2026 00:46:29 +0000 (0:00:00.226) 0:00:27.212 ********
2026-01-05 00:57:21.659360 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659365 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659370 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659398 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659404 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659410 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659415 | orchestrator |
2026-01-05 00:57:21.659420 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-05 00:57:21.659425 | orchestrator | Monday 05 January 2026 00:46:30 +0000 (0:00:00.994) 0:00:28.207 ********
2026-01-05 00:57:21.659430 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659436 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659441 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659446 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659451 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659456 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659461 | orchestrator |
2026-01-05 00:57:21.659466 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-05 00:57:21.659477 | orchestrator | Monday 05 January 2026 00:46:31 +0000 (0:00:01.110) 0:00:29.318 ********
2026-01-05 00:57:21.659482 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659487 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659493 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659498 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659503 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659508 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659519 | orchestrator |
2026-01-05 00:57:21.659525 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-05 00:57:21.659530 | orchestrator | Monday 05 January 2026 00:46:32 +0000 (0:00:00.750) 0:00:30.069 ********
2026-01-05 00:57:21.659535 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659540 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659545 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659550 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659555 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659560 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659565 | orchestrator |
2026-01-05 00:57:21.659571 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-05 00:57:21.659576 | orchestrator | Monday 05 January 2026 00:46:33 +0000 (0:00:00.907) 0:00:30.976 ********
2026-01-05 00:57:21.659581 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659586 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659591 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659596 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659602 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659607 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659612 | orchestrator |
2026-01-05 00:57:21.659617 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-05 00:57:21.659622 | orchestrator | Monday 05 January 2026 00:46:34 +0000 (0:00:01.071) 0:00:32.047 ********
2026-01-05 00:57:21.659766 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659771 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659776 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659785 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659811 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659820 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659828 | orchestrator |
2026-01-05 00:57:21.659836 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-05 00:57:21.659845 | orchestrator | Monday 05 January 2026 00:46:35 +0000 (0:00:00.843) 0:00:32.890 ********
2026-01-05 00:57:21.659853 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.659861 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.659870 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.659878 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.659888 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.659897 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.659907 | orchestrator |
2026-01-05 00:57:21.659917 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-05 00:57:21.659924 | orchestrator | Monday 05 January 2026 00:46:36 +0000 (0:00:00.872)
0:00:33.763 ******** 2026-01-05 00:57:21.659930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c0354e6--1633--54b4--ae3c--130b25b2cb6c-osd--block--3c0354e6--1633--54b4--ae3c--130b25b2cb6c', 'dm-uuid-LVM-1bNQGFvidc8nrkpPsYOdfDFIHYrFQGDVNYadW2DrsZLcIpedVCQWvwA5S76TEs8y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.659937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0807b7d--156a--51e9--a1ef--1ae613918df1-osd--block--a0807b7d--156a--51e9--a1ef--1ae613918df1', 'dm-uuid-LVM-tSTwV2iW4LjCTxZWgGeenUx76m7JqjXYZ2zcK3DpzSyV2V5mf7hYpMGRwS0AMSC3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4-osd--block--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4', 'dm-uuid-LVM-50FqfdBqQUcdg50l79EdQUKZYdvcLXdCBREro9BdYBXi8HBxSPBWBHTTiHZOlj0n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7959794c--cc9c--59d9--9b66--2faefa464ed4-osd--block--7959794c--cc9c--59d9--9b66--2faefa464ed4', 'dm-uuid-LVM-UK400EAe7rHq5oqlR20ULRKqz452MV7xjqxw9yWYFtxLz3ceifB7e1DtGshHklZH'], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part15', 
'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3c0354e6--1633--54b4--ae3c--130b25b2cb6c-osd--block--3c0354e6--1633--54b4--ae3c--130b25b2cb6c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wCF7O2-hVZ1-bGfi-mfQ0-cosE-pNsg-J8TXhC', 'scsi-0QEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20', 'scsi-SQEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a0807b7d--156a--51e9--a1ef--1ae613918df1-osd--block--a0807b7d--156a--51e9--a1ef--1ae613918df1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-R41xO0-GZQJ-UybA-uIr2-kVN3-mGCm-2mGCPc', 'scsi-0QEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2', 'scsi-SQEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8', 'scsi-SQEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660366 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.660376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part1', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part14', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part15', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part16', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660618 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4-osd--block--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mjpc8L-T1wk-iEqY-Gz09-7Sdg-NV65-N5qSru', 'scsi-0QEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1', 'scsi-SQEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7959794c--cc9c--59d9--9b66--2faefa464ed4-osd--block--7959794c--cc9c--59d9--9b66--2faefa464ed4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HAdYPE-BZe5-AHeR-h3Wm-6WWQ-jTlU-5natTC', 'scsi-0QEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613', 'scsi-SQEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763', 'scsi-SQEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1631feb6--d96c--5a43--89dd--a558edd73d68-osd--block--1631feb6--d96c--5a43--89dd--a558edd73d68', 'dm-uuid-LVM-WKe72EsaKn2scALuG4mViXkQfzDrjkxxfQ3uecey7fXGvXJLFsqzCQ4cHhvUZlrO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c322448e--6042--58d0--bdfa--5021630018c9-osd--block--c322448e--6042--58d0--bdfa--5021630018c9', 
'dm-uuid-LVM-CtcQh9shLksalAwi1IDQOa7qdl8NvgvDeTgTlyxw1rg0IE7jAYC9SABwh2bAeub6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660775 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.660781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part1', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part14', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part15', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part16', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1631feb6--d96c--5a43--89dd--a558edd73d68-osd--block--1631feb6--d96c--5a43--89dd--a558edd73d68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yyCkfM-Sg2M-Ic4d-BhS3-Esz0-MnBo-U5zR0u', 'scsi-0QEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421', 'scsi-SQEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c322448e--6042--58d0--bdfa--5021630018c9-osd--block--c322448e--6042--58d0--bdfa--5021630018c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FdWrmS-YAjp-4MMF-PHJE-vNPl-hPln-goRiyk', 'scsi-0QEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678', 'scsi-SQEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a', 'scsi-SQEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.660954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part1', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part14', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part15', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part16', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.660997 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.661005 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.661014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-05 00:57:21.661030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.661097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.661170 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.661178 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.661478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661632 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:57:21.661654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part1', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part14', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part15', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part16', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:57:21.661741 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:57:21.661750 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.661755 | orchestrator |
2026-01-05 00:57:21.661761 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-05 00:57:21.661767 | orchestrator | Monday 05 January 2026 00:46:37 +0000 (0:00:01.571) 0:00:35.334 ********
2026-01-05 00:57:21.661777 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c0354e6--1633--54b4--ae3c--130b25b2cb6c-osd--block--3c0354e6--1633--54b4--ae3c--130b25b2cb6c', 'dm-uuid-LVM-1bNQGFvidc8nrkpPsYOdfDFIHYrFQGDVNYadW2DrsZLcIpedVCQWvwA5S76TEs8y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.661788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool',
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0807b7d--156a--51e9--a1ef--1ae613918df1-osd--block--a0807b7d--156a--51e9--a1ef--1ae613918df1', 'dm-uuid-LVM-tSTwV2iW4LjCTxZWgGeenUx76m7JqjXYZ2zcK3DpzSyV2V5mf7hYpMGRwS0AMSC3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4-osd--block--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4', 'dm-uuid-LVM-50FqfdBqQUcdg50l79EdQUKZYdvcLXdCBREro9BdYBXi8HBxSPBWBHTTiHZOlj0n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.661891 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.661928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7959794c--cc9c--59d9--9b66--2faefa464ed4-osd--block--7959794c--cc9c--59d9--9b66--2faefa464ed4', 'dm-uuid-LVM-UK400EAe7rHq5oqlR20ULRKqz452MV7xjqxw9yWYFtxLz3ceifB7e1DtGshHklZH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.661940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662074 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662079 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3c0354e6--1633--54b4--ae3c--130b25b2cb6c-osd--block--3c0354e6--1633--54b4--ae3c--130b25b2cb6c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wCF7O2-hVZ1-bGfi-mfQ0-cosE-pNsg-J8TXhC', 'scsi-0QEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20', 'scsi-SQEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a0807b7d--156a--51e9--a1ef--1ae613918df1-osd--block--a0807b7d--156a--51e9--a1ef--1ae613918df1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-R41xO0-GZQJ-UybA-uIr2-kVN3-mGCm-2mGCPc', 'scsi-0QEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2', 'scsi-SQEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8', 'scsi-SQEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662170 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662212 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662238 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662243 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662306 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part1', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part14', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part15', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part16', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662321 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4-osd--block--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mjpc8L-T1wk-iEqY-Gz09-7Sdg-NV65-N5qSru', 'scsi-0QEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1', 'scsi-SQEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662327 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1631feb6--d96c--5a43--89dd--a558edd73d68-osd--block--1631feb6--d96c--5a43--89dd--a558edd73d68', 'dm-uuid-LVM-WKe72EsaKn2scALuG4mViXkQfzDrjkxxfQ3uecey7fXGvXJLFsqzCQ4cHhvUZlrO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662332 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7959794c--cc9c--59d9--9b66--2faefa464ed4-osd--block--7959794c--cc9c--59d9--9b66--2faefa464ed4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HAdYPE-BZe5-AHeR-h3Wm-6WWQ-jTlU-5natTC', 'scsi-0QEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613', 'scsi-SQEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c322448e--6042--58d0--bdfa--5021630018c9-osd--block--c322448e--6042--58d0--bdfa--5021630018c9', 'dm-uuid-LVM-CtcQh9shLksalAwi1IDQOa7qdl8NvgvDeTgTlyxw1rg0IE7jAYC9SABwh2bAeub6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662384 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763', 'scsi-SQEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662394 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662405 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662410 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662415 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.662461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662483 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part1', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part14', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part15', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part16', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662552 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1631feb6--d96c--5a43--89dd--a558edd73d68-osd--block--1631feb6--d96c--5a43--89dd--a558edd73d68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yyCkfM-Sg2M-Ic4d-BhS3-Esz0-MnBo-U5zR0u', 'scsi-0QEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421', 'scsi-SQEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c322448e--6042--58d0--bdfa--5021630018c9-osd--block--c322448e--6042--58d0--bdfa--5021630018c9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FdWrmS-YAjp-4MMF-PHJE-vNPl-hPln-goRiyk', 'scsi-0QEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678', 'scsi-SQEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662562 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a', 'scsi-SQEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662568 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.662607 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662621 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662633 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662638 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662643 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662649 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662702 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662717 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662752 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662762 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662771 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662780 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662788 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.662850 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.662880 | orchestrator | skipping: [testbed-node-0] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part1', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part14', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part15', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part16', 'scsi-SQEMU_QEMU_HARDDISK_3eca2ed8-76b6-4450-b1e2-293aa71f7a07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.662890 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.662899 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.662965 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.662974 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.662983 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.662989 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b353439-4187-4ebd-95db-cc3328d916f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 00:57:21.663030 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663042 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.663050 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663056 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663061 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663066 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663072 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663077 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663120 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663130 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:57:21.663136 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part1', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part14', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part15', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part16', 'scsi-SQEMU_QEMU_HARDDISK_be547545-78c3-41b9-a375-def0ee26b80a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.663146 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:57:21.663151 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.663156 | orchestrator |
2026-01-05 00:57:21.663194 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-05 00:57:21.663201 | orchestrator | Monday 05 January 2026 00:46:38 +0000 (0:00:00.841) 0:00:36.176 ********
2026-01-05 00:57:21.663207 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.663212 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.663217 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.663222 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.663227 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.663232 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.663237 | orchestrator |
2026-01-05 00:57:21.663242 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-05 00:57:21.663248 | orchestrator | Monday 05 January 2026 00:46:40 +0000 (0:00:01.549) 0:00:37.726 ********
2026-01-05 00:57:21.663253 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.663258 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.663263 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.663268 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.663309 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.663315 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.663320 | orchestrator |
2026-01-05 00:57:21.663326 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 00:57:21.663331 | orchestrator | Monday 05 January 2026 00:46:40 +0000 (0:00:00.916) 0:00:38.642 ********
2026-01-05 00:57:21.663336 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.663341 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.663346 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.663351 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.663357 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.663362 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.663367 | orchestrator |
2026-01-05 00:57:21.663372 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 00:57:21.663377 | orchestrator | Monday 05 January 2026 00:46:42 +0000 (0:00:01.493) 0:00:40.135 ********
2026-01-05 00:57:21.663382 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.663388 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.663393 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.663398 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.663403 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.663408 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.663413 | orchestrator |
2026-01-05 00:57:21.663418 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 00:57:21.663423 | orchestrator | Monday 05 January 2026 00:46:43 +0000 (0:00:00.834) 0:00:40.969 ********
2026-01-05 00:57:21.663428 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.663433 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.663438 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.663443 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.663448 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.663453 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.663463 | orchestrator |
2026-01-05 00:57:21.663469 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 00:57:21.663474 | orchestrator | Monday 05 January 2026 00:46:44 +0000 (0:00:01.143) 0:00:42.113 ********
2026-01-05 00:57:21.663479 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.663484 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.663489 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.663494 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.663514 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.663520 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.663525 | orchestrator |
2026-01-05 00:57:21.663530 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-05 00:57:21.663535 | orchestrator | Monday 05 January 2026 00:46:45 +0000 (0:00:00.749) 0:00:42.862 ********
2026-01-05 00:57:21.663540 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:57:21.663545 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:57:21.663550 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:57:21.663555 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:57:21.663560 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:57:21.663565 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:57:21.663570 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:57:21.663583 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:57:21.663588 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:57:21.663593 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:57:21.663598 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:57:21.663604 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:57:21.663609 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:57:21.663613 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:57:21.663671 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:57:21.663691 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:57:21.663697 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:57:21.663702 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:57:21.663707 | orchestrator |
2026-01-05 00:57:21.663712 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-05 00:57:21.663717 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:03.795) 0:00:46.657 ********
2026-01-05 00:57:21.663722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:57:21.663727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:57:21.663733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:57:21.663738 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:57:21.663743 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:57:21.663749 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:57:21.663757 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.663762 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:57:21.663793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:57:21.663799 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.663805 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:57:21.663810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:57:21.663816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:57:21.663825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:57:21.663832 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.663840 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.663848 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:57:21.663864 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:57:21.663873 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:57:21.663887 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:57:21.663896 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.663905 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:57:21.663915 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:57:21.663925 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.663934 | orchestrator |
2026-01-05 00:57:21.663940 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-05 00:57:21.663946 | orchestrator | Monday 05 January 2026 00:46:49 +0000 (0:00:00.903) 0:00:47.561 ********
2026-01-05 00:57:21.663952 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.663958 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.663964 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.663970 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.663976 | orchestrator |
2026-01-05 00:57:21.663982 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-05 00:57:21.663989 | orchestrator | Monday 05 January 2026 00:46:51 +0000 (0:00:01.356) 0:00:48.917 ********
2026-01-05 00:57:21.663995 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.664001 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.664007 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.664013 | orchestrator |
2026-01-05 00:57:21.664019 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-05 00:57:21.664025 | orchestrator | Monday 05 January 2026 00:46:51 +0000 (0:00:00.497) 0:00:49.415 ********
2026-01-05 00:57:21.664031 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.664037 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.664043 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.664049 | orchestrator |
2026-01-05 00:57:21.664055 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-05 00:57:21.664061 | orchestrator | Monday 05 January 2026 00:46:52 +0000 (0:00:00.543) 0:00:49.959 ********
2026-01-05 00:57:21.664067 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.664073 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.664079 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.664085 | orchestrator |
2026-01-05 00:57:21.664091 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-05 00:57:21.664097 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:01.146) 0:00:51.105 ********
2026-01-05 00:57:21.664106 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.664116 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.664125 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.664134 | orchestrator |
2026-01-05 00:57:21.664141 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-05 00:57:21.664147 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.958) 0:00:52.064 ********
2026-01-05 00:57:21.664153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:57:21.664159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:57:21.664166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:57:21.664175 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.664185 | orchestrator |
2026-01-05 00:57:21.664194 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-05 00:57:21.664203 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.377) 0:00:52.441 ********
2026-01-05 00:57:21.664212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:57:21.664220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:57:21.664234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:57:21.664242 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.664251 | orchestrator |
2026-01-05 00:57:21.664259 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-05 00:57:21.664267 | orchestrator | Monday 05 January 2026 00:46:55 +0000 (0:00:00.362) 0:00:52.804 ********
2026-01-05 00:57:21.664293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:57:21.664302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:57:21.664310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:57:21.664317 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.664323 | orchestrator |
2026-01-05 00:57:21.664328 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-05 00:57:21.664333 | orchestrator | Monday 05 January 2026 00:46:55 +0000 (0:00:00.474) 0:00:53.278 ********
2026-01-05 00:57:21.664338 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.664343 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.664348 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.664353 | orchestrator |
2026-01-05 00:57:21.664358 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-05 00:57:21.664363 | orchestrator | Monday 05 January 2026 00:46:56 +0000 (0:00:00.424) 0:00:53.703 ********
2026-01-05 00:57:21.664368 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-05 00:57:21.664373 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-05 00:57:21.664400 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-05 00:57:21.664406 | orchestrator |
2026-01-05 00:57:21.664411 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-05 00:57:21.664416 | orchestrator | Monday 05 January 2026 00:46:57 +0000 (0:00:01.054) 0:00:54.757 ********
2026-01-05 00:57:21.664421 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 00:57:21.664427 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:57:21.664432 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:57:21.664437 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:57:21.664442 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-05 00:57:21.664451 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-05 00:57:21.664456 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-05 00:57:21.664461 | orchestrator |
2026-01-05 00:57:21.664466 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-05 00:57:21.664471 | orchestrator | Monday 05 January 2026 00:46:58 +0000 (0:00:00.955) 0:00:55.713 ********
2026-01-05 00:57:21.664476 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 00:57:21.664481 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:57:21.664486 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:57:21.664491 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:57:21.664496 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-05 00:57:21.664501 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-05 00:57:21.664506 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-05 00:57:21.664511 | orchestrator |
2026-01-05 00:57:21.664516 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:57:21.664521 |
orchestrator | Monday 05 January 2026 00:47:00 +0000 (0:00:02.385) 0:00:58.098 ******** 2026-01-05 00:57:21.664527 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.664577 | orchestrator | 2026-01-05 00:57:21.664584 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 00:57:21.664589 | orchestrator | Monday 05 January 2026 00:47:02 +0000 (0:00:01.642) 0:00:59.740 ******** 2026-01-05 00:57:21.664594 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.664599 | orchestrator | 2026-01-05 00:57:21.664604 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 00:57:21.664610 | orchestrator | Monday 05 January 2026 00:47:03 +0000 (0:00:01.520) 0:01:01.260 ******** 2026-01-05 00:57:21.664615 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.664620 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.664625 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.664630 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.664635 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.664640 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.664645 | orchestrator | 2026-01-05 00:57:21.664650 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 00:57:21.664655 | orchestrator | Monday 05 January 2026 00:47:05 +0000 (0:00:01.912) 0:01:03.172 ******** 2026-01-05 00:57:21.664660 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.664665 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.664670 | orchestrator | skipping: [testbed-node-1] 
2026-01-05 00:57:21.664675 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.664680 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.664685 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.664690 | orchestrator | 2026-01-05 00:57:21.664695 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 00:57:21.664700 | orchestrator | Monday 05 January 2026 00:47:07 +0000 (0:00:01.584) 0:01:04.757 ******** 2026-01-05 00:57:21.664705 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.664710 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.664715 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.664720 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.664725 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.664730 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.664735 | orchestrator | 2026-01-05 00:57:21.664740 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 00:57:21.664745 | orchestrator | Monday 05 January 2026 00:47:08 +0000 (0:00:01.027) 0:01:05.785 ******** 2026-01-05 00:57:21.664750 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.664755 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.664760 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.664765 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.664770 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.664775 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.664780 | orchestrator | 2026-01-05 00:57:21.664785 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 00:57:21.664790 | orchestrator | Monday 05 January 2026 00:47:08 +0000 (0:00:00.809) 0:01:06.595 ******** 2026-01-05 00:57:21.664795 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.664801 | orchestrator | skipping: 
[testbed-node-4] 2026-01-05 00:57:21.664806 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.664811 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.664816 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.664840 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.664846 | orchestrator | 2026-01-05 00:57:21.664852 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 00:57:21.664857 | orchestrator | Monday 05 January 2026 00:47:10 +0000 (0:00:01.710) 0:01:08.306 ******** 2026-01-05 00:57:21.664862 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.664872 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.664880 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.664887 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.664892 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.664897 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.664902 | orchestrator | 2026-01-05 00:57:21.664907 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 00:57:21.664912 | orchestrator | Monday 05 January 2026 00:47:11 +0000 (0:00:00.820) 0:01:09.126 ******** 2026-01-05 00:57:21.664917 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.664925 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.664931 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.664936 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.664941 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.664946 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.664951 | orchestrator | 2026-01-05 00:57:21.664956 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 00:57:21.664963 | orchestrator | Monday 05 January 2026 00:47:12 +0000 (0:00:01.250) 0:01:10.377 ******** 
2026-01-05 00:57:21.664972 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.664981 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.664990 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.664999 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.665008 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.665014 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.665019 | orchestrator | 2026-01-05 00:57:21.665024 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 00:57:21.665029 | orchestrator | Monday 05 January 2026 00:47:14 +0000 (0:00:01.524) 0:01:11.902 ******** 2026-01-05 00:57:21.665034 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.665039 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.665044 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.665049 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.665054 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.665059 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.665064 | orchestrator | 2026-01-05 00:57:21.665069 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 00:57:21.665074 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:01.680) 0:01:13.582 ******** 2026-01-05 00:57:21.665079 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.665084 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.665089 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.665094 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.665099 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.665104 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.665109 | orchestrator | 2026-01-05 00:57:21.665114 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 00:57:21.665119 | orchestrator | Monday 
05 January 2026 00:47:16 +0000 (0:00:00.855) 0:01:14.438 ******** 2026-01-05 00:57:21.665125 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.665130 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.665134 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.665139 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.665144 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.665149 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.665154 | orchestrator | 2026-01-05 00:57:21.665159 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 00:57:21.665164 | orchestrator | Monday 05 January 2026 00:47:17 +0000 (0:00:01.165) 0:01:15.603 ******** 2026-01-05 00:57:21.665169 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.665174 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.665179 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.665184 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.665189 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.665199 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.665204 | orchestrator | 2026-01-05 00:57:21.665209 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 00:57:21.665214 | orchestrator | Monday 05 January 2026 00:47:18 +0000 (0:00:00.733) 0:01:16.337 ******** 2026-01-05 00:57:21.665219 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.665224 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.665229 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.665234 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.665239 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.665244 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.665249 | orchestrator | 2026-01-05 00:57:21.665254 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2026-01-05 00:57:21.665259 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:00.775) 0:01:17.112 ******** 2026-01-05 00:57:21.665264 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.665309 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.665319 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.665327 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.665335 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.665344 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.665356 | orchestrator | 2026-01-05 00:57:21.665365 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 00:57:21.665374 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:00.512) 0:01:17.625 ******** 2026-01-05 00:57:21.665382 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.665390 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.665397 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.665406 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.665414 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.665423 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.665432 | orchestrator | 2026-01-05 00:57:21.665440 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 00:57:21.665449 | orchestrator | Monday 05 January 2026 00:47:20 +0000 (0:00:00.732) 0:01:18.358 ******** 2026-01-05 00:57:21.665463 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.665472 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.665480 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.665489 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.665526 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.665537 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
00:57:21.665546 | orchestrator | 2026-01-05 00:57:21.665555 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 00:57:21.665563 | orchestrator | Monday 05 January 2026 00:47:21 +0000 (0:00:00.619) 0:01:18.977 ******** 2026-01-05 00:57:21.665572 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.665578 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.665583 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.665588 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.665593 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.665598 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.665603 | orchestrator | 2026-01-05 00:57:21.665608 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 00:57:21.665613 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.830) 0:01:19.808 ******** 2026-01-05 00:57:21.665622 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.665627 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.665632 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.665637 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.665642 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.665647 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.665652 | orchestrator | 2026-01-05 00:57:21.665657 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 00:57:21.665662 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.832) 0:01:20.640 ******** 2026-01-05 00:57:21.665673 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.665678 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.665683 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.665687 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.665692 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.665697 | 
orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.665702 | orchestrator | 2026-01-05 00:57:21.665707 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-05 00:57:21.665712 | orchestrator | Monday 05 January 2026 00:47:25 +0000 (0:00:02.291) 0:01:22.932 ******** 2026-01-05 00:57:21.665717 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:21.665722 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.665727 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.665732 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.665737 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:21.665742 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:21.665747 | orchestrator | 2026-01-05 00:57:21.665752 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-05 00:57:21.665757 | orchestrator | Monday 05 January 2026 00:47:28 +0000 (0:00:03.013) 0:01:25.945 ******** 2026-01-05 00:57:21.665762 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.665767 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.665772 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.665777 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:21.665782 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:21.665787 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:21.665792 | orchestrator | 2026-01-05 00:57:21.665797 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-05 00:57:21.665802 | orchestrator | Monday 05 January 2026 00:47:31 +0000 (0:00:02.909) 0:01:28.855 ******** 2026-01-05 00:57:21.665807 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.665812 | orchestrator | 
2026-01-05 00:57:21.665817 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-05 00:57:21.665822 | orchestrator | Monday 05 January 2026 00:47:33 +0000 (0:00:01.939) 0:01:30.794 ******** 2026-01-05 00:57:21.665827 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.665832 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.665837 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.665842 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.665847 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.665852 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.665857 | orchestrator | 2026-01-05 00:57:21.665862 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-05 00:57:21.665867 | orchestrator | Monday 05 January 2026 00:47:33 +0000 (0:00:00.773) 0:01:31.567 ******** 2026-01-05 00:57:21.665872 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.665877 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.665882 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.665887 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.665892 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.665897 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.665902 | orchestrator | 2026-01-05 00:57:21.665907 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-05 00:57:21.665912 | orchestrator | Monday 05 January 2026 00:47:35 +0000 (0:00:01.134) 0:01:32.702 ******** 2026-01-05 00:57:21.665917 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 00:57:21.665922 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 00:57:21.665927 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 00:57:21.665932 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 00:57:21.665940 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 00:57:21.665945 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 00:57:21.665950 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 00:57:21.665955 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 00:57:21.665960 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 00:57:21.665965 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 00:57:21.665989 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 00:57:21.665996 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 00:57:21.666001 | orchestrator | 2026-01-05 00:57:21.666006 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-05 00:57:21.666011 | orchestrator | Monday 05 January 2026 00:47:36 +0000 (0:00:01.789) 0:01:34.492 ******** 2026-01-05 00:57:21.666053 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.666058 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.666063 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.666068 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:21.666074 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:21.666079 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:21.666084 | orchestrator | 2026-01-05 00:57:21.666092 | orchestrator | TASK [ceph-container-common : Restore certificates 
selinux context] ************ 2026-01-05 00:57:21.666098 | orchestrator | Monday 05 January 2026 00:47:38 +0000 (0:00:02.084) 0:01:36.576 ******** 2026-01-05 00:57:21.666103 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.666108 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.666113 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.666118 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.666124 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.666129 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.666134 | orchestrator | 2026-01-05 00:57:21.666140 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-05 00:57:21.666145 | orchestrator | Monday 05 January 2026 00:47:39 +0000 (0:00:00.685) 0:01:37.261 ******** 2026-01-05 00:57:21.666150 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.666155 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.666160 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.666166 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.666171 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.666176 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.666181 | orchestrator | 2026-01-05 00:57:21.666187 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-05 00:57:21.666192 | orchestrator | Monday 05 January 2026 00:47:40 +0000 (0:00:00.896) 0:01:38.158 ******** 2026-01-05 00:57:21.666197 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.666202 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.666208 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.666213 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.666218 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.666223 | orchestrator | skipping: [testbed-node-2] 
2026-01-05 00:57:21.666228 | orchestrator | 2026-01-05 00:57:21.666234 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-05 00:57:21.666239 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.601) 0:01:38.759 ******** 2026-01-05 00:57:21.666245 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.666254 | orchestrator | 2026-01-05 00:57:21.666260 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-05 00:57:21.666265 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:01.228) 0:01:39.987 ******** 2026-01-05 00:57:21.666284 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.666293 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.666302 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.666311 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.666320 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.666329 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.666334 | orchestrator | 2026-01-05 00:57:21.666339 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-05 00:57:21.666344 | orchestrator | Monday 05 January 2026 00:48:38 +0000 (0:00:56.637) 0:02:36.625 ******** 2026-01-05 00:57:21.666349 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 00:57:21.666354 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 00:57:21.666359 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 00:57:21.666364 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.666369 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 00:57:21.666374 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 00:57:21.666380 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 00:57:21.666385 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.666390 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 00:57:21.666395 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 00:57:21.666400 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 00:57:21.666405 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.666410 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 00:57:21.666418 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 00:57:21.666427 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 00:57:21.666432 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.666437 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 00:57:21.666442 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 00:57:21.666447 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 00:57:21.666452 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.666479 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 00:57:21.666486 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 00:57:21.666491 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 00:57:21.666496 | orchestrator | skipping: 
[testbed-node-2]
2026-01-05 00:57:21.666501 | orchestrator |
2026-01-05 00:57:21.666506 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-05 00:57:21.666511 | orchestrator | Monday 05 January 2026 00:48:39 +0000 (0:00:00.707) 0:02:37.332 ********
2026-01-05 00:57:21.666516 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.666521 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.666526 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.666531 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.666536 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.666540 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.666553 | orchestrator |
2026-01-05 00:57:21.666558 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-05 00:57:21.666569 | orchestrator | Monday 05 January 2026 00:48:40 +0000 (0:00:00.971) 0:02:38.304 ********
2026-01-05 00:57:21.666578 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.666587 | orchestrator |
2026-01-05 00:57:21.666596 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-05 00:57:21.666605 | orchestrator | Monday 05 January 2026 00:48:40 +0000 (0:00:00.172) 0:02:38.476 ********
2026-01-05 00:57:21.666614 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.666622 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.666631 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.666639 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.666647 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.666655 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.666663 | orchestrator |
2026-01-05 00:57:21.666672 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-05 00:57:21.666681 | orchestrator | Monday 05 January 2026 00:48:41 +0000 (0:00:00.810) 0:02:39.287 ********
2026-01-05 00:57:21.666689 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.666698 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.666706 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.666714 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.666723 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.666731 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.666740 | orchestrator |
2026-01-05 00:57:21.666748 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-05 00:57:21.666758 | orchestrator | Monday 05 January 2026 00:48:42 +0000 (0:00:01.033) 0:02:40.320 ********
2026-01-05 00:57:21.666768 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.666778 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.666785 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.666790 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.666795 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.666800 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.666805 | orchestrator |
2026-01-05 00:57:21.666810 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-05 00:57:21.666816 | orchestrator | Monday 05 January 2026 00:48:43 +0000 (0:00:00.731) 0:02:41.052 ********
2026-01-05 00:57:21.666821 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.666826 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.666831 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.666836 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.666841 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.666846 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.666851 | orchestrator |
2026-01-05 00:57:21.666856 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-05 00:57:21.666861 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:02.713) 0:02:43.766 ********
2026-01-05 00:57:21.666866 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.666871 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.666876 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.666881 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.666886 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.666891 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.666896 | orchestrator |
2026-01-05 00:57:21.666901 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-05 00:57:21.666906 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:00.542) 0:02:44.308 ********
2026-01-05 00:57:21.666911 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:21.666917 | orchestrator |
2026-01-05 00:57:21.666923 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-05 00:57:21.666928 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:01.106) 0:02:45.415 ********
2026-01-05 00:57:21.666933 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.666943 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.666948 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.666953 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.666958 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.666963 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.666969 | orchestrator |
2026-01-05 00:57:21.666974 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-05 00:57:21.666979 | orchestrator | Monday 05 January 2026 00:48:48 +0000 (0:00:00.875) 0:02:46.290 ********
2026-01-05 00:57:21.666984 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.666989 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.666994 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.666999 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.667004 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.667009 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.667014 | orchestrator |
2026-01-05 00:57:21.667019 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-05 00:57:21.667028 | orchestrator | Monday 05 January 2026 00:48:49 +0000 (0:00:00.649) 0:02:46.940 ********
2026-01-05 00:57:21.667035 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.667042 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.667095 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.667104 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.667113 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.667122 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.667130 | orchestrator |
2026-01-05 00:57:21.667138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-05 00:57:21.667147 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:00.772) 0:02:47.712 ********
2026-01-05 00:57:21.667156 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.667164 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.667174 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.667180 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.667188 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.667194 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.667200 | orchestrator |
2026-01-05 00:57:21.667205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-05 00:57:21.667214 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:00.763) 0:02:48.475 ********
2026-01-05 00:57:21.667220 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.667225 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.667230 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.667234 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.667239 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.667244 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.667249 | orchestrator |
2026-01-05 00:57:21.667255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-05 00:57:21.667260 | orchestrator | Monday 05 January 2026 00:48:51 +0000 (0:00:00.739) 0:02:49.215 ********
2026-01-05 00:57:21.667265 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.667307 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.667314 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.667319 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.667324 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.667329 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.667334 | orchestrator |
2026-01-05 00:57:21.667339 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-05 00:57:21.667344 | orchestrator | Monday 05 January 2026 00:48:52 +0000 (0:00:00.644) 0:02:49.859 ********
2026-01-05 00:57:21.667349 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.667354 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.667359 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.667364 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.667375 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.667380 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.667385 | orchestrator |
2026-01-05 00:57:21.667390 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-05 00:57:21.667395 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:00.887) 0:02:50.747 ********
2026-01-05 00:57:21.667400 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.667405 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.667410 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.667415 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.667420 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.667425 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.667430 | orchestrator |
2026-01-05 00:57:21.667435 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-05 00:57:21.667440 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:00.672) 0:02:51.419 ********
2026-01-05 00:57:21.667445 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.667450 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.667455 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.667460 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.667465 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.667470 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.667475 | orchestrator |
2026-01-05 00:57:21.667481 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-05 00:57:21.667490 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:01.282) 0:02:52.702 ********
2026-01-05 00:57:21.667496 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:21.667501 | orchestrator |
2026-01-05 00:57:21.667506 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-05 00:57:21.667511 | orchestrator | Monday 05 January 2026 00:48:56 +0000 (0:00:01.228) 0:02:53.930 ********
2026-01-05 00:57:21.667516 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-05 00:57:21.667521 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-05 00:57:21.667526 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-05 00:57:21.667531 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-05 00:57:21.667536 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-05 00:57:21.667541 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-05 00:57:21.667546 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-05 00:57:21.667551 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-05 00:57:21.667556 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-05 00:57:21.667561 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-05 00:57:21.667566 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-05 00:57:21.667571 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-05 00:57:21.667577 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-05 00:57:21.667586 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-05 00:57:21.667591 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-05 00:57:21.667596 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-05 00:57:21.667601 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-05 00:57:21.667606 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-05 00:57:21.667635 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-05 00:57:21.667642 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-05 00:57:21.667647 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-05 00:57:21.667652 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-05 00:57:21.667661 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-05 00:57:21.667666 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-05 00:57:21.667671 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-05 00:57:21.667676 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-05 00:57:21.667681 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-05 00:57:21.667686 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-05 00:57:21.667694 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-05 00:57:21.667699 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-05 00:57:21.667705 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-05 00:57:21.667710 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-05 00:57:21.667715 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-05 00:57:21.667720 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-05 00:57:21.667725 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-05 00:57:21.667730 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-05 00:57:21.667735 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-05 00:57:21.667740 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-05 00:57:21.667745 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-05 00:57:21.667751 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-05 00:57:21.667756 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-05 00:57:21.667761 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-05 00:57:21.667766 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-05 00:57:21.667771 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-05 00:57:21.667776 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-05 00:57:21.667781 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-05 00:57:21.667786 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-05 00:57:21.667791 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 00:57:21.667796 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 00:57:21.667802 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 00:57:21.667807 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-05 00:57:21.667812 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 00:57:21.667817 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 00:57:21.667822 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 00:57:21.667827 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 00:57:21.667832 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 00:57:21.667837 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 00:57:21.667842 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 00:57:21.667847 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 00:57:21.667852 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 00:57:21.667857 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 00:57:21.667862 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 00:57:21.667868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 00:57:21.667875 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 00:57:21.667881 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 00:57:21.667886 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 00:57:21.667891 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 00:57:21.667896 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 00:57:21.667901 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 00:57:21.667906 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 00:57:21.667911 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 00:57:21.667917 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 00:57:21.667922 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 00:57:21.667927 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 00:57:21.667932 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 00:57:21.667937 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 00:57:21.667958 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 00:57:21.667964 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 00:57:21.667970 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 00:57:21.667975 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 00:57:21.667980 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 00:57:21.667985 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 00:57:21.667990 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 00:57:21.667995 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-05 00:57:21.668000 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-05 00:57:21.668008 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-05 00:57:21.668013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 00:57:21.668018 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-05 00:57:21.668024 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-05 00:57:21.668029 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-05 00:57:21.668034 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-05 00:57:21.668039 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-05 00:57:21.668044 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-05 00:57:21.668049 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-05 00:57:21.668054 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-05 00:57:21.668059 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-05 00:57:21.668064 | orchestrator |
2026-01-05 00:57:21.668069 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-05 00:57:21.668075 | orchestrator | Monday 05 January 2026 00:49:02 +0000 (0:00:06.594) 0:03:00.524 ********
2026-01-05 00:57:21.668080 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668085 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668090 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668095 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.668100 | orchestrator |
2026-01-05 00:57:21.668105 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-05 00:57:21.668111 | orchestrator | Monday 05 January 2026 00:49:04 +0000 (0:00:01.323) 0:03:01.848 ********
2026-01-05 00:57:21.668119 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.668125 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.668130 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.668135 | orchestrator |
2026-01-05 00:57:21.668140 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-05 00:57:21.668145 | orchestrator | Monday 05 January 2026 00:49:05 +0000 (0:00:01.322) 0:03:03.171 ********
2026-01-05 00:57:21.668150 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.668155 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.668161 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.668170 | orchestrator |
2026-01-05 00:57:21.668179 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-05 00:57:21.668187 | orchestrator | Monday 05 January 2026 00:49:06 +0000 (0:00:01.386) 0:03:04.557 ********
2026-01-05 00:57:21.668196 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.668204 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.668213 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.668220 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668229 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668237 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668245 | orchestrator |
2026-01-05 00:57:21.668254 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-05 00:57:21.668263 | orchestrator | Monday 05 January 2026 00:49:07 +0000 (0:00:00.868) 0:03:05.426 ********
2026-01-05 00:57:21.668284 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.668293 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.668300 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.668309 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668318 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668327 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668335 | orchestrator |
2026-01-05 00:57:21.668344 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-05 00:57:21.668352 | orchestrator | Monday 05 January 2026 00:49:08 +0000 (0:00:01.214) 0:03:06.640 ********
2026-01-05 00:57:21.668360 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.668369 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.668378 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.668386 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668395 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668404 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668413 | orchestrator |
2026-01-05 00:57:21.668449 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-05 00:57:21.668456 | orchestrator | Monday 05 January 2026 00:49:09 +0000 (0:00:00.774) 0:03:07.414 ********
2026-01-05 00:57:21.668461 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.668466 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.668471 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.668476 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668481 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668487 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668492 | orchestrator |
2026-01-05 00:57:21.668497 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-05 00:57:21.668502 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:00.825) 0:03:08.240 ********
2026-01-05 00:57:21.668513 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.668518 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.668523 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.668531 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668537 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668542 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668547 | orchestrator |
2026-01-05 00:57:21.668552 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-05 00:57:21.668557 | orchestrator | Monday 05 January 2026 00:49:11 +0000 (0:00:00.712) 0:03:09.125 ********
2026-01-05 00:57:21.668563 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.668570 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.668581 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.668594 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668603 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668612 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668620 | orchestrator |
2026-01-05 00:57:21.668629 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-05 00:57:21.668637 | orchestrator | Monday 05 January 2026 00:49:12 +0000 (0:00:00.712) 0:03:09.837 ********
2026-01-05 00:57:21.668646 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.668655 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.668664 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.668674 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668683 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668691 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668696 | orchestrator |
2026-01-05 00:57:21.668701 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-05 00:57:21.668706 | orchestrator | Monday 05 January 2026 00:49:12 +0000 (0:00:00.566) 0:03:10.404 ********
2026-01-05 00:57:21.668711 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.668716 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.668721 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.668727 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668732 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668736 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668742 | orchestrator |
2026-01-05 00:57:21.668747 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-05 00:57:21.668752 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.649) 0:03:11.053 ********
2026-01-05 00:57:21.668757 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668764 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668772 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668786 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.668794 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.668802 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.668810 | orchestrator |
2026-01-05 00:57:21.668818 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-05 00:57:21.668826 | orchestrator | Monday 05 January 2026 00:49:16 +0000 (0:00:03.185) 0:03:14.239 ********
2026-01-05 00:57:21.668834 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.668842 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.668850 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.668858 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668865 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668873 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668880 | orchestrator |
2026-01-05 00:57:21.668888 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-05 00:57:21.668897 | orchestrator | Monday 05 January 2026 00:49:17 +0000 (0:00:00.752) 0:03:14.992 ********
2026-01-05 00:57:21.668905 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.668965 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.668971 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.668976 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.668981 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.668986 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.668991 | orchestrator |
2026-01-05 00:57:21.668997 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-05 00:57:21.669002 | orchestrator | Monday 05 January 2026 00:49:17 +0000 (0:00:00.565) 0:03:15.558 ********
2026-01-05 00:57:21.669007 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.669013 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.669022 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.669030 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669038 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669047 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669055 | orchestrator |
2026-01-05 00:57:21.669064 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-05 00:57:21.669073 | orchestrator | Monday 05 January 2026 00:49:18 +0000 (0:00:00.796) 0:03:16.354 ********
2026-01-05 00:57:21.669083 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.669092 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.669102 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-05 00:57:21.669111 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669157 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669169 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669178 | orchestrator |
2026-01-05 00:57:21.669187 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-05 00:57:21.669196 | orchestrator | Monday 05 January 2026 00:49:19 +0000 (0:00:00.952) 0:03:17.010 ********
2026-01-05 00:57:21.669207 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-05 00:57:21.669224 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-05 00:57:21.669235 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-05 00:57:21.669245 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-05 00:57:21.669254 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.669263 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-05 00:57:21.669288 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-05 00:57:21.669304 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.669309 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.669314 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669319 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669324 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669329 | orchestrator |
2026-01-05 00:57:21.669334 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-05 00:57:21.669340 | orchestrator | Monday 05 January 2026 00:49:20 +0000 (0:00:00.952) 0:03:17.963 ********
2026-01-05 00:57:21.669345 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.669350 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.669355 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.669360 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669364 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669370 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669375 | orchestrator |
2026-01-05 00:57:21.669380 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-05 00:57:21.669385 | orchestrator | Monday 05 January 2026 00:49:20 +0000 (0:00:00.560) 0:03:18.523 ********
2026-01-05 00:57:21.669390 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.669395 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.669400 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.669405 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669410 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669415 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669420 | orchestrator |
2026-01-05 00:57:21.669427 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-05 00:57:21.669436 | orchestrator | Monday 05 January 2026 00:49:21 +0000 (0:00:00.669) 0:03:19.193 ********
2026-01-05 00:57:21.669444 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.669452 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.669461 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.669469 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669478 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669486 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669495 | orchestrator |
2026-01-05 00:57:21.669503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-05 00:57:21.669511 | orchestrator | Monday 05 January 2026 00:49:22 +0000 (0:00:00.543) 0:03:19.736 ********
2026-01-05 00:57:21.669518 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.669526 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.669535 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.669544 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669552 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669560 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669569 | orchestrator |
2026-01-05 00:57:21.669577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-05 00:57:21.669616 | orchestrator | Monday 05 January 2026 00:49:22 +0000 (0:00:00.687) 0:03:20.423 ********
2026-01-05 00:57:21.669626 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.669635 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.669644 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.669652 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669661 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669670 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669678 | orchestrator |
2026-01-05 00:57:21.669686 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-05 00:57:21.669695 | orchestrator | Monday 05 January 2026 00:49:23 +0000 (0:00:00.597) 0:03:21.021 ********
2026-01-05 00:57:21.669716 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.669725 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.669733 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.669742 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.669751 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.669763 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.669769 | orchestrator |
2026-01-05 00:57:21.669774 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-05 00:57:21.669779 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.765) 0:03:21.786 ********
2026-01-05 00:57:21.669784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:57:21.669789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:57:21.669794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:57:21.669799 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.669804 | orchestrator |
2026-01-05 00:57:21.669810 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-05 00:57:21.669815 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.366)
0:03:22.153 ******** 2026-01-05 00:57:21.669820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.669825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.669830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.669835 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.669840 | orchestrator | 2026-01-05 00:57:21.669845 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-05 00:57:21.669850 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.374) 0:03:22.528 ******** 2026-01-05 00:57:21.669855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.669860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.669865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.669870 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.669875 | orchestrator | 2026-01-05 00:57:21.669880 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-05 00:57:21.669885 | orchestrator | Monday 05 January 2026 00:49:25 +0000 (0:00:00.387) 0:03:22.916 ******** 2026-01-05 00:57:21.669890 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.669895 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.669900 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.669905 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.669910 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.669915 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.669920 | orchestrator | 2026-01-05 00:57:21.669925 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-05 00:57:21.669930 | orchestrator | Monday 05 January 2026 00:49:25 +0000 (0:00:00.642) 
0:03:23.558 ******** 2026-01-05 00:57:21.669935 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 00:57:21.669940 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-05 00:57:21.669945 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-05 00:57:21.669950 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-05 00:57:21.669955 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.669960 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-05 00:57:21.669965 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.669971 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-05 00:57:21.669975 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.669980 | orchestrator | 2026-01-05 00:57:21.669986 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-05 00:57:21.669991 | orchestrator | Monday 05 January 2026 00:49:27 +0000 (0:00:02.084) 0:03:25.642 ******** 2026-01-05 00:57:21.669996 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.670001 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.670009 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.670040 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:21.670046 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:21.670051 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:21.670056 | orchestrator | 2026-01-05 00:57:21.670061 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 00:57:21.670066 | orchestrator | Monday 05 January 2026 00:49:30 +0000 (0:00:02.960) 0:03:28.602 ******** 2026-01-05 00:57:21.670071 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.670076 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.670081 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.670086 | orchestrator | changed: [testbed-node-0] 2026-01-05 
00:57:21.670091 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:21.670096 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:21.670101 | orchestrator | 2026-01-05 00:57:21.670106 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-05 00:57:21.670112 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:01.165) 0:03:29.768 ******** 2026-01-05 00:57:21.670117 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670122 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.670127 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.670132 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.670137 | orchestrator | 2026-01-05 00:57:21.670143 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-05 00:57:21.670168 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:00.937) 0:03:30.706 ******** 2026-01-05 00:57:21.670174 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.670179 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.670184 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.670189 | orchestrator | 2026-01-05 00:57:21.670194 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-05 00:57:21.670199 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:00.298) 0:03:31.004 ******** 2026-01-05 00:57:21.670204 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:21.670210 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:21.670215 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:21.670220 | orchestrator | 2026-01-05 00:57:21.670225 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-05 00:57:21.670230 | orchestrator | Monday 05 January 2026 
00:49:34 +0000 (0:00:01.395) 0:03:32.399 ******** 2026-01-05 00:57:21.670235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-05 00:57:21.670243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-05 00:57:21.670248 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-05 00:57:21.670253 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.670258 | orchestrator | 2026-01-05 00:57:21.670263 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-05 00:57:21.670268 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:00.915) 0:03:33.314 ******** 2026-01-05 00:57:21.670290 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.670299 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.670308 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.670316 | orchestrator | 2026-01-05 00:57:21.670325 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-05 00:57:21.670333 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:00.299) 0:03:33.614 ******** 2026-01-05 00:57:21.670343 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.670352 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.670361 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.670370 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.670377 | orchestrator | 2026-01-05 00:57:21.670387 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-05 00:57:21.670392 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:00.937) 0:03:34.551 ******** 2026-01-05 00:57:21.670397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.670403 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.670408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.670413 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670418 | orchestrator | 2026-01-05 00:57:21.670423 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-05 00:57:21.670428 | orchestrator | Monday 05 January 2026 00:49:37 +0000 (0:00:00.433) 0:03:34.984 ******** 2026-01-05 00:57:21.670433 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670438 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.670443 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.670448 | orchestrator | 2026-01-05 00:57:21.670453 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-05 00:57:21.670458 | orchestrator | Monday 05 January 2026 00:49:37 +0000 (0:00:00.282) 0:03:35.267 ******** 2026-01-05 00:57:21.670463 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670468 | orchestrator | 2026-01-05 00:57:21.670473 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-05 00:57:21.670478 | orchestrator | Monday 05 January 2026 00:49:37 +0000 (0:00:00.214) 0:03:35.482 ******** 2026-01-05 00:57:21.670483 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670489 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.670494 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.670499 | orchestrator | 2026-01-05 00:57:21.670504 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-05 00:57:21.670509 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:00.337) 0:03:35.819 ******** 2026-01-05 00:57:21.670514 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670519 | orchestrator | 2026-01-05 00:57:21.670524 | 
orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-05 00:57:21.670529 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:00.192) 0:03:36.012 ******** 2026-01-05 00:57:21.670534 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670539 | orchestrator | 2026-01-05 00:57:21.670544 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-05 00:57:21.670549 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:00.196) 0:03:36.208 ******** 2026-01-05 00:57:21.670555 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670560 | orchestrator | 2026-01-05 00:57:21.670565 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-05 00:57:21.670570 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:00.098) 0:03:36.307 ******** 2026-01-05 00:57:21.670575 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670580 | orchestrator | 2026-01-05 00:57:21.670585 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-05 00:57:21.670590 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:00.195) 0:03:36.502 ******** 2026-01-05 00:57:21.670595 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670600 | orchestrator | 2026-01-05 00:57:21.670605 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-05 00:57:21.670610 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:00.556) 0:03:37.059 ******** 2026-01-05 00:57:21.670615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.670620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.670625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.670630 | orchestrator | skipping: 
[testbed-node-3] 2026-01-05 00:57:21.670635 | orchestrator | 2026-01-05 00:57:21.670641 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-05 00:57:21.670680 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:00.380) 0:03:37.439 ******** 2026-01-05 00:57:21.670686 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670691 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.670696 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.670702 | orchestrator | 2026-01-05 00:57:21.670707 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-05 00:57:21.670712 | orchestrator | Monday 05 January 2026 00:49:40 +0000 (0:00:00.347) 0:03:37.786 ******** 2026-01-05 00:57:21.670717 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670722 | orchestrator | 2026-01-05 00:57:21.670727 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-05 00:57:21.670732 | orchestrator | Monday 05 January 2026 00:49:40 +0000 (0:00:00.212) 0:03:37.998 ******** 2026-01-05 00:57:21.670737 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670742 | orchestrator | 2026-01-05 00:57:21.670751 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-05 00:57:21.670756 | orchestrator | Monday 05 January 2026 00:49:40 +0000 (0:00:00.224) 0:03:38.223 ******** 2026-01-05 00:57:21.670761 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.670768 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.670777 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.670786 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.670795 | orchestrator | 2026-01-05 00:57:21.670803 | orchestrator | RUNNING HANDLER 
[ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-05 00:57:21.670811 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:00.852) 0:03:39.076 ******** 2026-01-05 00:57:21.670819 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.670828 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.670836 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.670845 | orchestrator | 2026-01-05 00:57:21.670855 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-05 00:57:21.670864 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:00.288) 0:03:39.365 ******** 2026-01-05 00:57:21.670873 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.670881 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.670886 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.670891 | orchestrator | 2026-01-05 00:57:21.670896 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-05 00:57:21.670901 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:01.170) 0:03:40.535 ******** 2026-01-05 00:57:21.670906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.670911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.670916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.670921 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.670926 | orchestrator | 2026-01-05 00:57:21.670931 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-05 00:57:21.670936 | orchestrator | Monday 05 January 2026 00:49:43 +0000 (0:00:00.728) 0:03:41.264 ******** 2026-01-05 00:57:21.670941 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.670947 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.670952 | orchestrator | ok: 
[testbed-node-5] 2026-01-05 00:57:21.670957 | orchestrator | 2026-01-05 00:57:21.670962 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-05 00:57:21.670967 | orchestrator | Monday 05 January 2026 00:49:44 +0000 (0:00:00.421) 0:03:41.686 ******** 2026-01-05 00:57:21.670972 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.670977 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.670982 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.670987 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.670992 | orchestrator | 2026-01-05 00:57:21.671003 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-05 00:57:21.671008 | orchestrator | Monday 05 January 2026 00:49:44 +0000 (0:00:00.705) 0:03:42.392 ******** 2026-01-05 00:57:21.671013 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.671018 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.671023 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.671028 | orchestrator | 2026-01-05 00:57:21.671033 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-05 00:57:21.671038 | orchestrator | Monday 05 January 2026 00:49:45 +0000 (0:00:00.461) 0:03:42.854 ******** 2026-01-05 00:57:21.671043 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.671048 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.671053 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.671058 | orchestrator | 2026-01-05 00:57:21.671063 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-05 00:57:21.671068 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:01.170) 0:03:44.024 ******** 2026-01-05 00:57:21.671073 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.671078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.671084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.671089 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.671094 | orchestrator | 2026-01-05 00:57:21.671099 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-05 00:57:21.671104 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:00.564) 0:03:44.589 ******** 2026-01-05 00:57:21.671109 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.671114 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.671119 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.671124 | orchestrator | 2026-01-05 00:57:21.671129 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-05 00:57:21.671134 | orchestrator | Monday 05 January 2026 00:49:47 +0000 (0:00:00.333) 0:03:44.923 ******** 2026-01-05 00:57:21.671139 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.671144 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.671149 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.671154 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.671159 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.671184 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.671190 | orchestrator | 2026-01-05 00:57:21.671195 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-05 00:57:21.671200 | orchestrator | Monday 05 January 2026 00:49:47 +0000 (0:00:00.712) 0:03:45.635 ******** 2026-01-05 00:57:21.671205 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.671210 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.671215 | orchestrator | skipping: [testbed-node-5] 
2026-01-05 00:57:21.671220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.671225 | orchestrator | 2026-01-05 00:57:21.671230 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-05 00:57:21.671235 | orchestrator | Monday 05 January 2026 00:49:48 +0000 (0:00:00.783) 0:03:46.418 ******** 2026-01-05 00:57:21.671240 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.671248 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.671254 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.671259 | orchestrator | 2026-01-05 00:57:21.671264 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-05 00:57:21.671304 | orchestrator | Monday 05 January 2026 00:49:49 +0000 (0:00:00.420) 0:03:46.839 ******** 2026-01-05 00:57:21.671311 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:21.671317 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:21.671322 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:21.671327 | orchestrator | 2026-01-05 00:57:21.671332 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-05 00:57:21.671341 | orchestrator | Monday 05 January 2026 00:49:50 +0000 (0:00:01.308) 0:03:48.147 ******** 2026-01-05 00:57:21.671347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-05 00:57:21.671352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-05 00:57:21.671357 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-05 00:57:21.671362 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.671367 | orchestrator | 2026-01-05 00:57:21.671372 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-05 00:57:21.671377 | orchestrator | 
Monday 05 January 2026 00:49:51 +0000 (0:00:00.557) 0:03:48.705 ******** 2026-01-05 00:57:21.671382 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.671387 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.671392 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.671397 | orchestrator | 2026-01-05 00:57:21.671402 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-05 00:57:21.671407 | orchestrator | 2026-01-05 00:57:21.671412 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 00:57:21.671417 | orchestrator | Monday 05 January 2026 00:49:51 +0000 (0:00:00.502) 0:03:49.207 ******** 2026-01-05 00:57:21.671423 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.671428 | orchestrator | 2026-01-05 00:57:21.671433 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 00:57:21.671438 | orchestrator | Monday 05 January 2026 00:49:52 +0000 (0:00:00.627) 0:03:49.835 ******** 2026-01-05 00:57:21.671443 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.671449 | orchestrator | 2026-01-05 00:57:21.671458 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 00:57:21.671464 | orchestrator | Monday 05 January 2026 00:49:52 +0000 (0:00:00.477) 0:03:50.313 ******** 2026-01-05 00:57:21.671469 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.671474 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.671479 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.671484 | orchestrator | 2026-01-05 00:57:21.671489 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 
2026-01-05 00:57:21.671494 | orchestrator | Monday 05 January 2026 00:49:53 +0000 (0:00:00.949) 0:03:51.262 ******** 2026-01-05 00:57:21.671499 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.671504 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.671509 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.671514 | orchestrator | 2026-01-05 00:57:21.671519 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 00:57:21.671524 | orchestrator | Monday 05 January 2026 00:49:53 +0000 (0:00:00.303) 0:03:51.566 ******** 2026-01-05 00:57:21.671529 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.671535 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.671540 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.671545 | orchestrator | 2026-01-05 00:57:21.671550 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 00:57:21.671555 | orchestrator | Monday 05 January 2026 00:49:54 +0000 (0:00:00.316) 0:03:51.883 ******** 2026-01-05 00:57:21.671560 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.671565 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.671570 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.671575 | orchestrator | 2026-01-05 00:57:21.671580 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 00:57:21.671585 | orchestrator | Monday 05 January 2026 00:49:54 +0000 (0:00:00.268) 0:03:52.151 ******** 2026-01-05 00:57:21.671590 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.671595 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.671600 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.671608 | orchestrator | 2026-01-05 00:57:21.671613 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 
2026-01-05 00:57:21.671618 | orchestrator | Monday 05 January 2026 00:49:55 +0000 (0:00:00.872) 0:03:53.023 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 05 January 2026 00:49:55 +0000 (0:00:00.296) 0:03:53.320 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 05 January 2026 00:49:55 +0000 (0:00:00.290) 0:03:53.610 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 05 January 2026 00:49:56 +0000 (0:00:00.704) 0:03:54.315 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 05 January 2026 00:49:57 +0000 (0:00:00.686) 0:03:55.002 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 05 January 2026 00:49:57 +0000 (0:00:00.433) 0:03:55.435 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 05 January 2026 00:49:58 +0000 (0:00:00.388) 0:03:55.823 ********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 05 January 2026 00:49:58 +0000 (0:00:00.347) 0:03:56.171 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 05 January 2026 00:49:58 +0000 (0:00:00.306) 0:03:56.478 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 05 January 2026 00:49:59 +0000 (0:00:00.679) 0:03:57.158 ********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 05 January 2026 00:50:00 +0000 (0:00:00.677) 0:03:57.835 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 05 January 2026 00:50:00 +0000 (0:00:00.424) 0:03:58.260 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 05 January 2026 00:50:01 +0000 (0:00:00.510) 0:03:58.770 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 05 January 2026 00:50:01 +0000 (0:00:00.871) 0:03:59.642 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Monday 05 January 2026 00:50:02 +0000 (0:00:00.601) 0:04:00.243 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Monday 05 January 2026 00:50:02 +0000 (0:00:00.345) 0:04:00.589 ********
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Monday 05 January 2026 00:50:03 +0000 (0:00:00.776) 0:04:01.365 ********
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Monday 05 January 2026 00:50:03 +0000 (0:00:00.181) 0:04:01.547 ********
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Monday 05 January 2026 00:50:04 +0000 (0:00:01.085) 0:04:02.633 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Monday 05 January 2026 00:50:05 +0000 (0:00:00.395) 0:04:03.028 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Monday 05 January 2026 00:50:05 +0000 (0:00:00.499) 0:04:03.528 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Monday 05 January 2026 00:50:07 +0000 (0:00:01.906) 0:04:05.434 ********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mon : Create monitor directory] *************************************
Monday 05 January 2026 00:50:08 +0000 (0:00:01.063) 0:04:06.497 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Monday 05 January 2026 00:50:09 +0000 (0:00:00.871) 0:04:07.368 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Monday 05 January 2026 00:50:10 +0000 (0:00:00.762) 0:04:08.131 ********
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Monday 05 January 2026 00:50:11 +0000 (0:00:01.515) 0:04:09.646 ********
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Monday 05 January 2026 00:50:13 +0000 (0:00:01.273) 0:04:10.919 ********
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-1] => (item=None)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-1 -> {{ item }}]
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Monday 05 January 2026 00:50:16 +0000 (0:00:03.492) 0:04:14.411 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Monday 05 January 2026 00:50:17 +0000 (0:00:01.202) 0:04:15.614 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Monday 05 January 2026 00:50:18 +0000 (0:00:00.897) 0:04:16.511 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Monday 05 January 2026 00:50:19 +0000 (0:00:00.863) 0:04:17.375 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Monday 05 January 2026 00:50:22 +0000 (0:00:02.559) 0:04:19.934 ********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Monday 05 January 2026 00:50:23 +0000 (0:00:01.631) 0:04:21.566 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Monday 05 January 2026 00:50:24 +0000 (0:00:00.392) 0:04:21.958 ********
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Monday 05 January 2026 00:50:24 +0000 (0:00:00.621) 0:04:22.580 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Monday 05 January 2026 00:50:25 +0000 (0:00:00.228) 0:04:22.809 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Monday 05 January 2026 00:50:25 +0000 (0:00:00.302) 0:04:23.111 ********
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Monday 05 January 2026 00:50:26 +0000 (0:00:00.636) 0:04:23.748 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Monday 05 January 2026 00:50:28 +0000 (0:00:02.386) 0:04:26.134 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Monday 05 January 2026 00:50:29 +0000 (0:00:01.341) 0:04:27.475 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Start the monitor service] ************************************
Monday 05 January 2026 00:50:31 +0000 (0:00:01.750) 0:04:29.226 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Monday 05 January 2026 00:50:33 +0000 (0:00:02.207) 0:04:31.434 ********
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Monday 05 January 2026 00:50:34 +0000 (0:00:00.653) 0:04:32.087 ********
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Monday 05 January 2026 00:50:35 +0000 (0:00:01.435) 0:04:33.523 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Monday 05 January 2026 00:50:45 +0000 (0:00:09.480) 0:04:43.004 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Monday 05 January 2026 00:50:46 +0000 (0:00:00.697) 0:04:43.701 ********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7b6a2b9fad08f3d2641ec6e4c7930eb631f5e33f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7b6a2b9fad08f3d2641ec6e4c7930eb631f5e33f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7b6a2b9fad08f3d2641ec6e4c7930eb631f5e33f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7b6a2b9fad08f3d2641ec6e4c7930eb631f5e33f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7b6a2b9fad08f3d2641ec6e4c7930eb631f5e33f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7b6a2b9fad08f3d2641ec6e4c7930eb631f5e33f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__7b6a2b9fad08f3d2641ec6e4c7930eb631f5e33f'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 05 January 2026 00:51:01 +0000 (0:00:15.400) 0:04:59.101 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Monday 05 January 2026 00:51:01 +0000 (0:00:00.421) 0:04:59.523 ********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Monday 05 January 2026 00:51:02 +0000 (0:00:00.916) 0:05:00.440 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Monday 05 January 2026 00:51:03 +0000 (0:00:00.350) 0:05:00.791 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Monday 05 January 2026 00:51:03 +0000 (0:00:00.367) 0:05:01.159 ********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Monday 05 January 2026 00:51:04 +0000 (0:00:01.032) 0:05:02.192 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
| 2026-01-05 00:57:21.673611 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-05 00:57:21.673619 | orchestrator | 2026-01-05 00:57:21.673656 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 00:57:21.673666 | orchestrator | Monday 05 January 2026 00:51:05 +0000 (0:00:01.107) 0:05:03.299 ******** 2026-01-05 00:57:21.673675 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1, testbed-node-2, testbed-node-0 2026-01-05 00:57:21.673684 | orchestrator | 2026-01-05 00:57:21.673689 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 00:57:21.673694 | orchestrator | Monday 05 January 2026 00:51:06 +0000 (0:00:00.599) 0:05:03.898 ******** 2026-01-05 00:57:21.673699 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.673704 | orchestrator | 2026-01-05 00:57:21.673713 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 00:57:21.673718 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:00.904) 0:05:04.803 ******** 2026-01-05 00:57:21.673723 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.673728 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.673734 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.673739 | orchestrator | 2026-01-05 00:57:21.673744 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 00:57:21.673749 | orchestrator | Monday 05 January 2026 00:51:08 +0000 (0:00:01.023) 0:05:05.826 ******** 2026-01-05 00:57:21.673754 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.673759 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.673764 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 00:57:21.673769 | orchestrator | 2026-01-05 00:57:21.673774 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 00:57:21.673779 | orchestrator | Monday 05 January 2026 00:51:08 +0000 (0:00:00.337) 0:05:06.164 ******** 2026-01-05 00:57:21.673784 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.673789 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.673794 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.673799 | orchestrator | 2026-01-05 00:57:21.673804 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 00:57:21.673809 | orchestrator | Monday 05 January 2026 00:51:09 +0000 (0:00:00.660) 0:05:06.824 ******** 2026-01-05 00:57:21.673814 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.673819 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.673824 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.673829 | orchestrator | 2026-01-05 00:57:21.673834 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 00:57:21.673839 | orchestrator | Monday 05 January 2026 00:51:09 +0000 (0:00:00.326) 0:05:07.150 ******** 2026-01-05 00:57:21.673845 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.673859 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.673867 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.673874 | orchestrator | 2026-01-05 00:57:21.673881 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 00:57:21.673888 | orchestrator | Monday 05 January 2026 00:51:10 +0000 (0:00:00.785) 0:05:07.936 ******** 2026-01-05 00:57:21.673895 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.673904 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.673911 | orchestrator | skipping: [testbed-node-2] 
2026-01-05 00:57:21.673918 | orchestrator | 2026-01-05 00:57:21.673927 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 00:57:21.673936 | orchestrator | Monday 05 January 2026 00:51:10 +0000 (0:00:00.386) 0:05:08.323 ******** 2026-01-05 00:57:21.673946 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.673951 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.673956 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.673961 | orchestrator | 2026-01-05 00:57:21.673966 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 00:57:21.673971 | orchestrator | Monday 05 January 2026 00:51:11 +0000 (0:00:00.351) 0:05:08.675 ******** 2026-01-05 00:57:21.673976 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.673981 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.673986 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.673991 | orchestrator | 2026-01-05 00:57:21.673996 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 00:57:21.674001 | orchestrator | Monday 05 January 2026 00:51:12 +0000 (0:00:01.200) 0:05:09.875 ******** 2026-01-05 00:57:21.674006 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.674034 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.674041 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.674046 | orchestrator | 2026-01-05 00:57:21.674051 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 00:57:21.674056 | orchestrator | Monday 05 January 2026 00:51:12 +0000 (0:00:00.746) 0:05:10.622 ******** 2026-01-05 00:57:21.674061 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.674066 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.674071 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.674076 | 
orchestrator | 2026-01-05 00:57:21.674081 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 00:57:21.674086 | orchestrator | Monday 05 January 2026 00:51:13 +0000 (0:00:00.391) 0:05:11.014 ******** 2026-01-05 00:57:21.674091 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.674097 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.674102 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.674106 | orchestrator | 2026-01-05 00:57:21.674112 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 00:57:21.674117 | orchestrator | Monday 05 January 2026 00:51:13 +0000 (0:00:00.369) 0:05:11.383 ******** 2026-01-05 00:57:21.674122 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.674127 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.674132 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.674137 | orchestrator | 2026-01-05 00:57:21.674142 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 00:57:21.674147 | orchestrator | Monday 05 January 2026 00:51:14 +0000 (0:00:00.634) 0:05:12.018 ******** 2026-01-05 00:57:21.674152 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.674157 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.674181 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.674187 | orchestrator | 2026-01-05 00:57:21.674192 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 00:57:21.674198 | orchestrator | Monday 05 January 2026 00:51:14 +0000 (0:00:00.324) 0:05:12.342 ******** 2026-01-05 00:57:21.674203 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.674208 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.674218 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.674223 | orchestrator | 
2026-01-05 00:57:21.674231 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:57:21.674239 | orchestrator | Monday 05 January 2026 00:51:15 +0000 (0:00:00.339) 0:05:12.682 ********
2026-01-05 00:57:21.674245 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.674250 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.674255 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.674260 | orchestrator |
2026-01-05 00:57:21.674268 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:57:21.674309 | orchestrator | Monday 05 January 2026 00:51:15 +0000 (0:00:00.377) 0:05:13.059 ********
2026-01-05 00:57:21.674315 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.674320 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.674325 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.674330 | orchestrator |
2026-01-05 00:57:21.674335 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:57:21.674340 | orchestrator | Monday 05 January 2026 00:51:16 +0000 (0:00:00.649) 0:05:13.709 ********
2026-01-05 00:57:21.674345 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.674350 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.674356 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.674361 | orchestrator |
2026-01-05 00:57:21.674366 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:57:21.674371 | orchestrator | Monday 05 January 2026 00:51:16 +0000 (0:00:00.408) 0:05:14.118 ********
2026-01-05 00:57:21.674376 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.674381 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.674386 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.674391 | orchestrator |
2026-01-05 00:57:21.674396 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:57:21.674401 | orchestrator | Monday 05 January 2026 00:51:16 +0000 (0:00:00.368) 0:05:14.486 ********
2026-01-05 00:57:21.674406 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.674411 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.674416 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.674421 | orchestrator |
2026-01-05 00:57:21.674426 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-05 00:57:21.674431 | orchestrator | Monday 05 January 2026 00:51:17 +0000 (0:00:00.870) 0:05:15.357 ********
2026-01-05 00:57:21.674436 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:57:21.674442 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:57:21.674447 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:57:21.674452 | orchestrator |
2026-01-05 00:57:21.674457 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-05 00:57:21.674462 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.743) 0:05:16.100 ********
2026-01-05 00:57:21.674467 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:21.674472 | orchestrator |
2026-01-05 00:57:21.674477 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-05 00:57:21.674482 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.558) 0:05:16.658 ********
2026-01-05 00:57:21.674488 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.674493 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.674498 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.674503 | orchestrator |
2026-01-05 00:57:21.674508 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-05 00:57:21.674513 | orchestrator | Monday 05 January 2026 00:51:19 +0000 (0:00:00.752) 0:05:17.411 ********
2026-01-05 00:57:21.674518 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.674523 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.674528 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.674537 | orchestrator |
2026-01-05 00:57:21.674542 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-05 00:57:21.674547 | orchestrator | Monday 05 January 2026 00:51:20 +0000 (0:00:00.602) 0:05:18.013 ********
2026-01-05 00:57:21.674552 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:57:21.674558 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:57:21.674563 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:57:21.674568 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-01-05 00:57:21.674573 | orchestrator |
2026-01-05 00:57:21.674578 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-05 00:57:21.674583 | orchestrator | Monday 05 January 2026 00:51:29 +0000 (0:00:09.356) 0:05:27.369 ********
2026-01-05 00:57:21.674588 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.674593 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.674598 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.674603 | orchestrator |
2026-01-05 00:57:21.674608 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-05 00:57:21.674613 | orchestrator | Monday 05 January 2026 00:51:30 +0000 (0:00:00.386) 0:05:27.756 ********
2026-01-05 00:57:21.674618 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-05 00:57:21.674624 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 00:57:21.674629 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 00:57:21.674634 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:57:21.674639 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-05 00:57:21.674644 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:57:21.674649 | orchestrator |
2026-01-05 00:57:21.674675 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-05 00:57:21.674681 | orchestrator | Monday 05 January 2026 00:51:32 +0000 (0:00:02.446) 0:05:30.203 ********
2026-01-05 00:57:21.674686 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-05 00:57:21.674692 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 00:57:21.674697 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 00:57:21.674702 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:57:21.674707 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-05 00:57:21.674712 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-05 00:57:21.674717 | orchestrator |
2026-01-05 00:57:21.674722 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-05 00:57:21.674727 | orchestrator | Monday 05 January 2026 00:51:33 +0000 (0:00:01.187) 0:05:31.391 ********
2026-01-05 00:57:21.674732 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.674741 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.674746 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.674751 | orchestrator |
2026-01-05 00:57:21.674756 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-05 00:57:21.674762 | orchestrator | Monday 05 January 2026 00:51:35 +0000 (0:00:01.623) 0:05:33.014 ********
2026-01-05 00:57:21.674771 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.674779 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.674788 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.674799 | orchestrator |
2026-01-05 00:57:21.674807 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-05 00:57:21.674815 | orchestrator | Monday 05 January 2026 00:51:35 +0000 (0:00:00.443) 0:05:33.458 ********
2026-01-05 00:57:21.674823 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.674830 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.674838 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.674847 | orchestrator |
2026-01-05 00:57:21.674854 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-05 00:57:21.674862 | orchestrator | Monday 05 January 2026 00:51:36 +0000 (0:00:00.346) 0:05:33.804 ********
2026-01-05 00:57:21.674876 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:21.674883 | orchestrator |
2026-01-05 00:57:21.674891 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-05 00:57:21.674899 | orchestrator | Monday 05 January 2026 00:51:37 +0000 (0:00:00.929) 0:05:34.733 ********
2026-01-05 00:57:21.674907 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.674915 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.674923 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.674932 | orchestrator |
2026-01-05 00:57:21.674940 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-05 00:57:21.674948 | orchestrator | Monday 05 January 2026 00:51:37 +0000 (0:00:00.388) 0:05:35.121 ********
2026-01-05 00:57:21.674956 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.674961 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.674965 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.674970 | orchestrator |
2026-01-05 00:57:21.674975 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-05 00:57:21.674980 | orchestrator | Monday 05 January 2026 00:51:37 +0000 (0:00:00.354) 0:05:35.476 ********
2026-01-05 00:57:21.674985 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:21.674989 | orchestrator |
2026-01-05 00:57:21.674994 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-05 00:57:21.674999 | orchestrator | Monday 05 January 2026 00:51:38 +0000 (0:00:00.590) 0:05:36.066 ********
2026-01-05 00:57:21.675004 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.675009 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.675013 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.675018 | orchestrator |
2026-01-05 00:57:21.675023 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-05 00:57:21.675028 | orchestrator | Monday 05 January 2026 00:51:39 +0000 (0:00:01.470) 0:05:37.537 ********
2026-01-05 00:57:21.675032 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.675037 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.675042 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.675047 | orchestrator |
2026-01-05 00:57:21.675051 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-05 00:57:21.675056 | orchestrator | Monday 05 January 2026 00:51:40 +0000 (0:00:01.073) 0:05:38.610 ********
2026-01-05 00:57:21.675061 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.675066 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.675071 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.675075 | orchestrator |
2026-01-05 00:57:21.675080 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-05 00:57:21.675085 | orchestrator | Monday 05 January 2026 00:51:42 +0000 (0:00:01.532) 0:05:40.142 ********
2026-01-05 00:57:21.675090 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.675094 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.675099 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.675104 | orchestrator |
2026-01-05 00:57:21.675109 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-05 00:57:21.675113 | orchestrator | Monday 05 January 2026 00:51:44 +0000 (0:00:01.838) 0:05:41.981 ********
2026-01-05 00:57:21.675118 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.675123 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.675128 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-05 00:57:21.675132 | orchestrator |
2026-01-05 00:57:21.675137 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-05 00:57:21.675142 | orchestrator | Monday 05 January 2026 00:51:45 +0000 (0:00:00.717) 0:05:42.698 ********
2026-01-05 00:57:21.675147 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-05 00:57:21.675184 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-05 00:57:21.675190 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-05 00:57:21.675195 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-05 00:57:21.675200 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-01-05 00:57:21.675205 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-01-05 00:57:21.675210 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:57:21.675214 | orchestrator |
2026-01-05 00:57:21.675223 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-05 00:57:21.675228 | orchestrator | Monday 05 January 2026 00:52:20 +0000 (0:00:35.962) 0:06:18.661 ********
2026-01-05 00:57:21.675233 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:57:21.675238 | orchestrator |
2026-01-05 00:57:21.675243 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-05 00:57:21.675248 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:01.327) 0:06:19.988 ********
2026-01-05 00:57:21.675252 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.675257 | orchestrator |
2026-01-05 00:57:21.675262 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-05 00:57:21.675267 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:00.344) 0:06:20.333 ********
2026-01-05 00:57:21.675285 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.675291 | orchestrator |
2026-01-05 00:57:21.675296 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-05 00:57:21.675300 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:00.134) 0:06:20.467 ********
2026-01-05 00:57:21.675305 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-05 00:57:21.675310 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-05 00:57:21.675315 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-05 00:57:21.675320 | orchestrator |
2026-01-05 00:57:21.675324 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-05 00:57:21.675329 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:06.935) 0:06:27.403 ********
2026-01-05 00:57:21.675334 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-05 00:57:21.675339 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-05 00:57:21.675344 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-05 00:57:21.675348 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-05 00:57:21.675353 | orchestrator |
2026-01-05 00:57:21.675358 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-05 00:57:21.675363 | orchestrator | Monday 05 January 2026 00:52:34 +0000 (0:00:05.255) 0:06:32.658 ********
2026-01-05 00:57:21.675367 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.675372 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.675377 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.675382 | orchestrator |
2026-01-05 00:57:21.675387 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-05 00:57:21.675391 | orchestrator | Monday 05 January 2026 00:52:35 +0000 (0:00:00.727) 0:06:33.385 ********
2026-01-05 00:57:21.675396 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:21.675401 | orchestrator |
2026-01-05 00:57:21.675406 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-05 00:57:21.675418 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:00.498) 0:06:33.884 ********
2026-01-05 00:57:21.675423 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.675427 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.675432 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.675437 | orchestrator |
2026-01-05 00:57:21.675442 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-05 00:57:21.675447 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:00.444) 0:06:34.329 ********
2026-01-05 00:57:21.675451 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.675456 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.675461 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.675466 | orchestrator |
2026-01-05 00:57:21.675470 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-05 00:57:21.675475 | orchestrator | Monday 05 January 2026 00:52:37 +0000 (0:00:01.193) 0:06:35.522 ********
2026-01-05 00:57:21.675480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:57:21.675485 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:57:21.675490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:57:21.675494 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.675499 | orchestrator |
2026-01-05 00:57:21.675504 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-05 00:57:21.675509 | orchestrator | Monday 05 January 2026 00:52:38 +0000 (0:00:00.630) 0:06:36.153 ********
2026-01-05 00:57:21.675513 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.675518 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.675523 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.675527 | orchestrator |
2026-01-05 00:57:21.675532 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-05 00:57:21.675537 | orchestrator |
2026-01-05 00:57:21.675542 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:57:21.675547 | orchestrator | Monday 05 January 2026 00:52:39 +0000 (0:00:00.877) 0:06:37.030 ********
2026-01-05 00:57:21.675570 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.675576 | orchestrator |
2026-01-05 00:57:21.675581 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 00:57:21.675586 | orchestrator | Monday 05 January 2026 00:52:39 +0000 (0:00:00.613) 0:06:37.644 ********
2026-01-05 00:57:21.675590 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.675595 | orchestrator |
2026-01-05 00:57:21.675600 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 00:57:21.675605 | orchestrator | Monday 05 January 2026 00:52:40 +0000 (0:00:00.619) 0:06:38.263 ********
2026-01-05 00:57:21.675609 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.675617 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.675622 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.675627 | orchestrator |
2026-01-05 00:57:21.675632 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 00:57:21.675637 | orchestrator | Monday 05 January 2026 00:52:41 +0000 (0:00:00.677) 0:06:38.940 ********
2026-01-05 00:57:21.675641 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.675646 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.675651 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.675656 | orchestrator |
2026-01-05 00:57:21.675661 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:57:21.675665 | orchestrator | Monday 05 January 2026 00:52:42 +0000 (0:00:00.891) 0:06:39.832 ********
2026-01-05 00:57:21.675670 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.675675 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.675680 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.675688 | orchestrator |
2026-01-05 00:57:21.675693 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:57:21.675697 | orchestrator | Monday 05 January 2026 00:52:42 +0000 (0:00:00.727) 0:06:40.559 ********
2026-01-05 00:57:21.675702 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.675707 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.675712 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.675716 | orchestrator |
2026-01-05 00:57:21.675721 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:57:21.675726 | orchestrator | Monday 05 January 2026 00:52:43 +0000 (0:00:00.785) 0:06:41.345 ********
2026-01-05 00:57:21.675731 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.675736 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.675740 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.675745 | orchestrator |
2026-01-05 00:57:21.675750 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:57:21.675755 | orchestrator | Monday 05 January 2026 00:52:44 +0000 (0:00:00.692) 0:06:42.037 ********
2026-01-05 00:57:21.675759 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.675764 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.675769 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.675774 | orchestrator |
2026-01-05 00:57:21.675778 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:57:21.675783 | orchestrator | Monday 05 January 2026 00:52:44 +0000 (0:00:00.318) 0:06:42.356 ********
2026-01-05 00:57:21.675788 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.675793 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.675797 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.675802 | orchestrator |
2026-01-05 00:57:21.675807 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:57:21.675812 | orchestrator | Monday 05 January 2026 00:52:45 +0000 (0:00:00.320) 0:06:42.677 ********
2026-01-05 00:57:21.675816 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.675821 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.675826 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.675830 | orchestrator |
2026-01-05 00:57:21.675835 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:57:21.675840 | orchestrator | Monday 05 January 2026 00:52:46 +0000 (0:00:01.148) 0:06:43.826 ********
2026-01-05 00:57:21.675845 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.675849 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.675854 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.675859 | orchestrator |
2026-01-05 00:57:21.675864 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 00:57:21.675868 | orchestrator | Monday 05 January 2026 00:52:47 +0000 (0:00:01.180) 0:06:45.007 ********
2026-01-05 00:57:21.675873 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.675878 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.675883 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.675888 | orchestrator |
2026-01-05 00:57:21.675892 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 00:57:21.675897 | orchestrator | Monday 05 January 2026 00:52:47 +0000 (0:00:00.309) 0:06:45.316 ********
2026-01-05 00:57:21.675902 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.675906 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.675911 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.675916 | orchestrator |
2026-01-05 00:57:21.675921 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 00:57:21.675925 | orchestrator | Monday 05 January 2026 00:52:48 +0000 (0:00:00.353) 0:06:45.669 ********
2026-01-05 00:57:21.675930 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.675935 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.675940 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.675944 | orchestrator |
2026-01-05 00:57:21.675949 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 00:57:21.675957 | orchestrator | Monday 05 January 2026 00:52:48 +0000 (0:00:00.410) 0:06:46.080 ********
2026-01-05 00:57:21.675962 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.675967 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.675971 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.675976 | orchestrator |
2026-01-05 00:57:21.675981 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 00:57:21.675986 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:00.638) 0:06:46.718 ********
2026-01-05 00:57:21.675991 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.675995 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.676003 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.676008 | orchestrator |
2026-01-05 00:57:21.676013 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:57:21.676018 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:00.359) 0:06:47.077 ********
2026-01-05 00:57:21.676023 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.676027 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.676032 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.676037 | orchestrator |
2026-01-05 00:57:21.676042 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:57:21.676046 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:00.374) 0:06:47.451 ********
2026-01-05 00:57:21.676051 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.676056 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.676061 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.676065 | orchestrator |
2026-01-05 00:57:21.676072 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:57:21.676077 | orchestrator | Monday 05 January 2026 00:52:50 +0000 (0:00:00.330) 0:06:47.782 ********
2026-01-05 00:57:21.676082 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.676087 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.676092 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.676096 | orchestrator |
2026-01-05 00:57:21.676101 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:57:21.676106 | orchestrator | Monday 05 January 2026 00:52:50 +0000 (0:00:00.474) 0:06:48.256 ********
2026-01-05 00:57:21.676111 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.676115 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.676120 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.676125 | orchestrator |
2026-01-05 00:57:21.676130 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:57:21.676135 | orchestrator | Monday 05 January 2026 00:52:50 +0000 (0:00:00.322) 0:06:48.578 ********
2026-01-05 00:57:21.676139 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.676144 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.676149 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.676154 | orchestrator |
2026-01-05 00:57:21.676158 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-05 00:57:21.676163 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:00.490) 0:06:49.069 ********
2026-01-05 00:57:21.676168 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.676172 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.676177 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.676182 | orchestrator |
2026-01-05 00:57:21.676187 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-05 00:57:21.676191 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:00.490) 0:06:49.559 ********
2026-01-05 00:57:21.676196 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 00:57:21.676201 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:57:21.676206 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:57:21.676210 | orchestrator |
2026-01-05 00:57:21.676215 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-05 00:57:21.676223 | orchestrator | Monday 05 January 2026 00:52:52 +0000 (0:00:00.567) 0:06:50.127 ********
2026-01-05 00:57:21.676228 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.676233 | orchestrator |
2026-01-05 00:57:21.676237 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-05 00:57:21.676242 | orchestrator | Monday 05 January 2026 00:52:52 +0000 (0:00:00.470) 0:06:50.598 ********
2026-01-05 00:57:21.676247 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.676252 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.676256 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.676261 | orchestrator |
2026-01-05 00:57:21.676266 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-05 00:57:21.676283 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:00.436) 0:06:51.035 ********
2026-01-05 00:57:21.676288 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.676293 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.676298 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.676303 | orchestrator |
2026-01-05 00:57:21.676307 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-05 00:57:21.676312 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:00.270) 0:06:51.306 ********
2026-01-05 00:57:21.676317 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.676322 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.676327 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.676331 | orchestrator |
2026-01-05 00:57:21.676336 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-05 00:57:21.676341 | orchestrator | Monday 05 January 2026 00:52:54 +0000 (0:00:00.559) 0:06:51.865 ********
2026-01-05 00:57:21.676345 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.676350 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.676355 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.676359 | orchestrator |
2026-01-05 00:57:21.676364 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-05 00:57:21.676369 | orchestrator | Monday 05 January 2026 00:52:54 +0000 (0:00:00.325) 0:06:52.190 ********
2026-01-05 00:57:21.676374 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-05 00:57:21.676378 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-05 00:57:21.676383 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-05 00:57:21.676388 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-05 00:57:21.676393 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-05 00:57:21.676404 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-05 00:57:21.676409 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-05 00:57:21.676414 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-05 00:57:21.676419 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-05 00:57:21.676424 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-05 00:57:21.676429 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-05 00:57:21.676436 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-05 00:57:21.676441 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-05 00:57:21.676446 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-05 00:57:21.676451 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-05 00:57:21.676458 | orchestrator |
2026-01-05 00:57:21.676463 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-05 00:57:21.676468 | orchestrator | Monday 05 January 2026 00:52:59 +0000 (0:00:05.326) 0:06:57.516 ********
2026-01-05 00:57:21.676473 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.676478 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.676482 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.676487 | orchestrator |
2026-01-05 00:57:21.676492 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-05 00:57:21.676497 | orchestrator | Monday 05 January 2026 00:53:00 +0000 (0:00:00.387) 0:06:57.904 ********
2026-01-05 00:57:21.676502 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.676507 | orchestrator |
2026-01-05 00:57:21.676511 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-05 00:57:21.676516 | orchestrator | Monday 05 January 2026 00:53:00 +0000 (0:00:00.542) 0:06:58.447 ********
2026-01-05 00:57:21.676521 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-05 00:57:21.676526 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-05 00:57:21.676531 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-05 00:57:21.676535 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-05 00:57:21.676540 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-05 00:57:21.676545 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-05 00:57:21.676550 | orchestrator |
2026-01-05 00:57:21.676555 | orchestrator | TASK [ceph-osd
: Get keys from monitors] *************************************** 2026-01-05 00:57:21.676560 | orchestrator | Monday 05 January 2026 00:53:02 +0000 (0:00:01.385) 0:06:59.832 ******** 2026-01-05 00:57:21.676564 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:57:21.676569 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 00:57:21.676574 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:57:21.676579 | orchestrator | 2026-01-05 00:57:21.676584 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-05 00:57:21.676589 | orchestrator | Monday 05 January 2026 00:53:04 +0000 (0:00:02.523) 0:07:02.356 ******** 2026-01-05 00:57:21.676593 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 00:57:21.676598 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 00:57:21.676603 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.676608 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 00:57:21.676613 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 00:57:21.676617 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.676622 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 00:57:21.676627 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 00:57:21.676632 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.676637 | orchestrator | 2026-01-05 00:57:21.676646 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-05 00:57:21.676652 | orchestrator | Monday 05 January 2026 00:53:06 +0000 (0:00:01.494) 0:07:03.850 ******** 2026-01-05 00:57:21.676656 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-05 00:57:21.676661 | orchestrator | 2026-01-05 00:57:21.676666 | orchestrator | TASK [ceph-osd : 
Include_tasks scenarios/lvm.yml] ****************************** 2026-01-05 00:57:21.676670 | orchestrator | Monday 05 January 2026 00:53:08 +0000 (0:00:02.423) 0:07:06.273 ******** 2026-01-05 00:57:21.676675 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.676680 | orchestrator | 2026-01-05 00:57:21.676685 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-05 00:57:21.676697 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:00.541) 0:07:06.814 ******** 2026-01-05 00:57:21.676702 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1631feb6-d96c-5a43-89dd-a558edd73d68', 'data_vg': 'ceph-1631feb6-d96c-5a43-89dd-a558edd73d68'}) 2026-01-05 00:57:21.676708 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4', 'data_vg': 'ceph-f1b84f59-e4b7-5f9e-a7e5-ba7b4020d7e4'}) 2026-01-05 00:57:21.676716 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3c0354e6-1633-54b4-ae3c-130b25b2cb6c', 'data_vg': 'ceph-3c0354e6-1633-54b4-ae3c-130b25b2cb6c'}) 2026-01-05 00:57:21.676721 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c322448e-6042-58d0-bdfa-5021630018c9', 'data_vg': 'ceph-c322448e-6042-58d0-bdfa-5021630018c9'}) 2026-01-05 00:57:21.676726 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a0807b7d-156a-51e9-a1ef-1ae613918df1', 'data_vg': 'ceph-a0807b7d-156a-51e9-a1ef-1ae613918df1'}) 2026-01-05 00:57:21.676731 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7959794c-cc9c-59d9-9b66-2faefa464ed4', 'data_vg': 'ceph-7959794c-cc9c-59d9-9b66-2faefa464ed4'}) 2026-01-05 00:57:21.676736 | orchestrator | 2026-01-05 00:57:21.676741 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-05 00:57:21.676749 | 
orchestrator | Monday 05 January 2026 00:53:51 +0000 (0:00:42.582) 0:07:49.397 ******** 2026-01-05 00:57:21.676754 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.676759 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.676764 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.676769 | orchestrator | 2026-01-05 00:57:21.676774 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-05 00:57:21.676778 | orchestrator | Monday 05 January 2026 00:53:52 +0000 (0:00:00.387) 0:07:49.784 ******** 2026-01-05 00:57:21.676783 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.676788 | orchestrator | 2026-01-05 00:57:21.676793 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-05 00:57:21.676798 | orchestrator | Monday 05 January 2026 00:53:52 +0000 (0:00:00.574) 0:07:50.359 ******** 2026-01-05 00:57:21.676803 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.676807 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.676812 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.676817 | orchestrator | 2026-01-05 00:57:21.676822 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-05 00:57:21.676827 | orchestrator | Monday 05 January 2026 00:53:53 +0000 (0:00:01.038) 0:07:51.397 ******** 2026-01-05 00:57:21.676831 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.676836 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.676841 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.676846 | orchestrator | 2026-01-05 00:57:21.676851 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-05 00:57:21.676856 | orchestrator | Monday 05 January 2026 00:53:56 +0000 (0:00:02.816) 0:07:54.214 ******** 2026-01-05 
00:57:21.676860 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.676865 | orchestrator | 2026-01-05 00:57:21.676870 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-01-05 00:57:21.676875 | orchestrator | Monday 05 January 2026 00:53:57 +0000 (0:00:00.550) 0:07:54.764 ******** 2026-01-05 00:57:21.676880 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.676884 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.676889 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.676894 | orchestrator | 2026-01-05 00:57:21.676899 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-05 00:57:21.676904 | orchestrator | Monday 05 January 2026 00:53:58 +0000 (0:00:01.613) 0:07:56.377 ******** 2026-01-05 00:57:21.676912 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.676917 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.676922 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.676927 | orchestrator | 2026-01-05 00:57:21.676931 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-05 00:57:21.676936 | orchestrator | Monday 05 January 2026 00:53:59 +0000 (0:00:01.263) 0:07:57.641 ******** 2026-01-05 00:57:21.676941 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.676946 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.676951 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.676955 | orchestrator | 2026-01-05 00:57:21.676960 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-05 00:57:21.676965 | orchestrator | Monday 05 January 2026 00:54:02 +0000 (0:00:02.029) 0:07:59.670 ******** 2026-01-05 00:57:21.676970 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
00:57:21.676975 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.676979 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.676984 | orchestrator | 2026-01-05 00:57:21.676989 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-01-05 00:57:21.676994 | orchestrator | Monday 05 January 2026 00:54:02 +0000 (0:00:00.355) 0:08:00.025 ******** 2026-01-05 00:57:21.676999 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677004 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677008 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.677013 | orchestrator | 2026-01-05 00:57:21.677018 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-05 00:57:21.677023 | orchestrator | Monday 05 January 2026 00:54:02 +0000 (0:00:00.638) 0:08:00.664 ******** 2026-01-05 00:57:21.677028 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 00:57:21.677032 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-01-05 00:57:21.677037 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-01-05 00:57:21.677042 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-01-05 00:57:21.677047 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-01-05 00:57:21.677051 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-01-05 00:57:21.677056 | orchestrator | 2026-01-05 00:57:21.677061 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-05 00:57:21.677066 | orchestrator | Monday 05 January 2026 00:54:04 +0000 (0:00:01.155) 0:08:01.819 ******** 2026-01-05 00:57:21.677071 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-05 00:57:21.677075 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-01-05 00:57:21.677080 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-05 00:57:21.677085 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-05 
00:57:21.677090 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-01-05 00:57:21.677097 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-01-05 00:57:21.677102 | orchestrator | 2026-01-05 00:57:21.677107 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-01-05 00:57:21.677112 | orchestrator | Monday 05 January 2026 00:54:06 +0000 (0:00:02.436) 0:08:04.256 ******** 2026-01-05 00:57:21.677117 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-01-05 00:57:21.677121 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-05 00:57:21.677126 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-05 00:57:21.677131 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-01-05 00:57:21.677136 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-01-05 00:57:21.677142 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-05 00:57:21.677149 | orchestrator | 2026-01-05 00:57:21.677154 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-05 00:57:21.677161 | orchestrator | Monday 05 January 2026 00:54:10 +0000 (0:00:03.801) 0:08:08.058 ******** 2026-01-05 00:57:21.677166 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677171 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677176 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 00:57:21.677184 | orchestrator | 2026-01-05 00:57:21.677189 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-05 00:57:21.677194 | orchestrator | Monday 05 January 2026 00:54:13 +0000 (0:00:03.290) 0:08:11.349 ******** 2026-01-05 00:57:21.677198 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677203 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677208 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for 
all osd to be up (60 retries left). 2026-01-05 00:57:21.677213 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 00:57:21.677218 | orchestrator | 2026-01-05 00:57:21.677222 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-05 00:57:21.677227 | orchestrator | Monday 05 January 2026 00:54:26 +0000 (0:00:12.530) 0:08:23.879 ******** 2026-01-05 00:57:21.677232 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677237 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677241 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.677246 | orchestrator | 2026-01-05 00:57:21.677251 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 00:57:21.677256 | orchestrator | Monday 05 January 2026 00:54:27 +0000 (0:00:00.987) 0:08:24.867 ******** 2026-01-05 00:57:21.677260 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677265 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677320 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.677326 | orchestrator | 2026-01-05 00:57:21.677331 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-05 00:57:21.677336 | orchestrator | Monday 05 January 2026 00:54:27 +0000 (0:00:00.311) 0:08:25.179 ******** 2026-01-05 00:57:21.677341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.677346 | orchestrator | 2026-01-05 00:57:21.677350 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-05 00:57:21.677355 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:00.501) 0:08:25.680 ******** 2026-01-05 00:57:21.677360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.677365 | orchestrator 
| skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.677369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.677375 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677383 | orchestrator | 2026-01-05 00:57:21.677388 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-05 00:57:21.677393 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:00.613) 0:08:26.294 ******** 2026-01-05 00:57:21.677398 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677402 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677407 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.677412 | orchestrator | 2026-01-05 00:57:21.677417 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-05 00:57:21.677421 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:00.485) 0:08:26.779 ******** 2026-01-05 00:57:21.677426 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677431 | orchestrator | 2026-01-05 00:57:21.677436 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-05 00:57:21.677440 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:00.240) 0:08:27.020 ******** 2026-01-05 00:57:21.677445 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677450 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677455 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.677459 | orchestrator | 2026-01-05 00:57:21.677464 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-05 00:57:21.677469 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:00.338) 0:08:27.358 ******** 2026-01-05 00:57:21.677474 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677482 | orchestrator | 2026-01-05 
00:57:21.677487 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-05 00:57:21.677491 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:00.225) 0:08:27.584 ******** 2026-01-05 00:57:21.677496 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677501 | orchestrator | 2026-01-05 00:57:21.677506 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-05 00:57:21.677510 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.215) 0:08:27.799 ******** 2026-01-05 00:57:21.677515 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677520 | orchestrator | 2026-01-05 00:57:21.677524 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-05 00:57:21.677529 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.144) 0:08:27.943 ******** 2026-01-05 00:57:21.677534 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677539 | orchestrator | 2026-01-05 00:57:21.677547 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-05 00:57:21.677552 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.248) 0:08:28.192 ******** 2026-01-05 00:57:21.677557 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677561 | orchestrator | 2026-01-05 00:57:21.677566 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-05 00:57:21.677572 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.293) 0:08:28.486 ******** 2026-01-05 00:57:21.677580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.677589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.677601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.677610 | orchestrator | 
skipping: [testbed-node-3] 2026-01-05 00:57:21.677617 | orchestrator | 2026-01-05 00:57:21.677625 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-05 00:57:21.677636 | orchestrator | Monday 05 January 2026 00:54:31 +0000 (0:00:00.992) 0:08:29.478 ******** 2026-01-05 00:57:21.677644 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677651 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677659 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.677666 | orchestrator | 2026-01-05 00:57:21.677674 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-05 00:57:21.677683 | orchestrator | Monday 05 January 2026 00:54:32 +0000 (0:00:00.335) 0:08:29.813 ******** 2026-01-05 00:57:21.677691 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677699 | orchestrator | 2026-01-05 00:57:21.677708 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-05 00:57:21.677713 | orchestrator | Monday 05 January 2026 00:54:32 +0000 (0:00:00.236) 0:08:30.050 ******** 2026-01-05 00:57:21.677718 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677722 | orchestrator | 2026-01-05 00:57:21.677727 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-05 00:57:21.677732 | orchestrator | 2026-01-05 00:57:21.677737 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 00:57:21.677741 | orchestrator | Monday 05 January 2026 00:54:33 +0000 (0:00:00.698) 0:08:30.749 ******** 2026-01-05 00:57:21.677746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.677752 | orchestrator | 2026-01-05 00:57:21.677756 | orchestrator | 
TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 00:57:21.677761 | orchestrator | Monday 05 January 2026 00:54:34 +0000 (0:00:01.296) 0:08:32.045 ******** 2026-01-05 00:57:21.677766 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:21.677771 | orchestrator | 2026-01-05 00:57:21.677776 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 00:57:21.677787 | orchestrator | Monday 05 January 2026 00:54:35 +0000 (0:00:01.433) 0:08:33.479 ******** 2026-01-05 00:57:21.677791 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677796 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677801 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.677806 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.677811 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.677815 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.677820 | orchestrator | 2026-01-05 00:57:21.677825 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 00:57:21.677830 | orchestrator | Monday 05 January 2026 00:54:37 +0000 (0:00:01.367) 0:08:34.846 ******** 2026-01-05 00:57:21.677835 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.677840 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.677844 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.677849 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.677854 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.677859 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.677864 | orchestrator | 2026-01-05 00:57:21.677869 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 00:57:21.677873 | 
orchestrator | Monday 05 January 2026 00:54:37 +0000 (0:00:00.717) 0:08:35.564 ******** 2026-01-05 00:57:21.677878 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.677883 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.677888 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.677893 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.677898 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.677902 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.677907 | orchestrator | 2026-01-05 00:57:21.677912 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 00:57:21.677917 | orchestrator | Monday 05 January 2026 00:54:38 +0000 (0:00:01.085) 0:08:36.649 ******** 2026-01-05 00:57:21.677922 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.677926 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.677931 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.677936 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.677941 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.677946 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.677951 | orchestrator | 2026-01-05 00:57:21.677956 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 00:57:21.677961 | orchestrator | Monday 05 January 2026 00:54:39 +0000 (0:00:00.700) 0:08:37.349 ******** 2026-01-05 00:57:21.677965 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.677970 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.677975 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.677980 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.677985 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.677990 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.677994 | orchestrator | 2026-01-05 00:57:21.677999 | orchestrator | TASK [ceph-handler : Check 
for a rbd mirror container] ************************* 2026-01-05 00:57:21.678004 | orchestrator | Monday 05 January 2026 00:54:40 +0000 (0:00:01.304) 0:08:38.654 ******** 2026-01-05 00:57:21.678009 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.678041 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.678050 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.678055 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.678060 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.678065 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.678070 | orchestrator | 2026-01-05 00:57:21.678075 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 00:57:21.678080 | orchestrator | Monday 05 January 2026 00:54:41 +0000 (0:00:00.635) 0:08:39.289 ******** 2026-01-05 00:57:21.678085 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.678090 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.678098 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.678104 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.678112 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.678117 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.678122 | orchestrator | 2026-01-05 00:57:21.678127 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 00:57:21.678135 | orchestrator | Monday 05 January 2026 00:54:42 +0000 (0:00:00.943) 0:08:40.233 ******** 2026-01-05 00:57:21.678140 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.678144 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.678149 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.678154 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.678159 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.678163 | orchestrator | ok: [testbed-node-2] 2026-01-05 
00:57:21.678168 | orchestrator | 2026-01-05 00:57:21.678173 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 00:57:21.678178 | orchestrator | Monday 05 January 2026 00:54:43 +0000 (0:00:01.289) 0:08:41.523 ******** 2026-01-05 00:57:21.678182 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.678187 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.678192 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.678197 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.678201 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:21.678206 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:21.678211 | orchestrator | 2026-01-05 00:57:21.678215 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 00:57:21.678220 | orchestrator | Monday 05 January 2026 00:54:45 +0000 (0:00:01.404) 0:08:42.928 ******** 2026-01-05 00:57:21.678225 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.678230 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.678235 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.678239 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.678244 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.678249 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.678253 | orchestrator | 2026-01-05 00:57:21.678258 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 00:57:21.678263 | orchestrator | Monday 05 January 2026 00:54:45 +0000 (0:00:00.701) 0:08:43.629 ******** 2026-01-05 00:57:21.678268 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.678306 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.678311 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.678316 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:21.678321 | orchestrator | ok: 
[testbed-node-1]
2026-01-05 00:57:21.678326 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.678330 | orchestrator |
2026-01-05 00:57:21.678335 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 00:57:21.678340 | orchestrator | Monday 05 January 2026 00:54:46 +0000 (0:00:00.979) 0:08:44.609 ********
2026-01-05 00:57:21.678345 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.678350 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.678356 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.678364 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.678369 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.678374 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.678378 | orchestrator |
2026-01-05 00:57:21.678383 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 00:57:21.678388 | orchestrator | Monday 05 January 2026 00:54:47 +0000 (0:00:00.610) 0:08:45.219 ********
2026-01-05 00:57:21.678393 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.678397 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.678402 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.678407 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.678412 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.678416 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.678425 | orchestrator |
2026-01-05 00:57:21.678429 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 00:57:21.678434 | orchestrator | Monday 05 January 2026 00:54:48 +0000 (0:00:00.941) 0:08:46.161 ********
2026-01-05 00:57:21.678439 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.678444 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.678448 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.678452 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.678457 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.678462 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.678466 | orchestrator |
2026-01-05 00:57:21.678471 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:57:21.678475 | orchestrator | Monday 05 January 2026 00:54:49 +0000 (0:00:00.641) 0:08:46.802 ********
2026-01-05 00:57:21.678480 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.678484 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.678489 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.678493 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.678498 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.678502 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.678506 | orchestrator |
2026-01-05 00:57:21.678511 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:57:21.678516 | orchestrator | Monday 05 January 2026 00:54:50 +0000 (0:00:00.894) 0:08:47.697 ********
2026-01-05 00:57:21.678520 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.678525 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.678529 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.678533 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:21.678538 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:21.678542 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:21.678547 | orchestrator |
2026-01-05 00:57:21.678551 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:57:21.678556 | orchestrator | Monday 05 January 2026 00:54:50 +0000 (0:00:00.713) 0:08:48.411 ********
2026-01-05 00:57:21.678563 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.678575 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.678582 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.678590 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.678597 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.678606 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.678614 | orchestrator |
2026-01-05 00:57:21.678622 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:57:21.678628 | orchestrator | Monday 05 January 2026 00:54:51 +0000 (0:00:00.884) 0:08:49.296 ********
2026-01-05 00:57:21.678633 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.678638 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.678642 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.678646 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.678651 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.678655 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.678660 | orchestrator |
2026-01-05 00:57:21.678664 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:57:21.678672 | orchestrator | Monday 05 January 2026 00:54:52 +0000 (0:00:00.658) 0:08:49.954 ********
2026-01-05 00:57:21.678677 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.678681 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.678686 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.678690 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.678694 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.678699 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.678703 | orchestrator |
2026-01-05 00:57:21.678708 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-05 00:57:21.678712 | orchestrator | Monday 05 January 2026 00:54:53 +0000 (0:00:01.389) 0:08:51.344 ********
2026-01-05 00:57:21.678717 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:57:21.678725 | orchestrator |
2026-01-05 00:57:21.678730 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-05 00:57:21.678735 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:04.231) 0:08:55.576 ********
2026-01-05 00:57:21.678743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:57:21.678750 | orchestrator |
2026-01-05 00:57:21.678757 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-05 00:57:21.678764 | orchestrator | Monday 05 January 2026 00:55:00 +0000 (0:00:02.154) 0:08:57.730 ********
2026-01-05 00:57:21.678771 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.678778 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.678785 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.678793 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.678800 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.678808 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.678816 | orchestrator |
2026-01-05 00:57:21.678824 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-05 00:57:21.678832 | orchestrator | Monday 05 January 2026 00:55:01 +0000 (0:00:01.843) 0:08:59.574 ********
2026-01-05 00:57:21.678837 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.678841 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.678846 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.678850 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.678855 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.678859 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.678864 | orchestrator |
2026-01-05 00:57:21.678868 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-05 00:57:21.678873 | orchestrator | Monday 05 January 2026 00:55:02 +0000 (0:00:01.083) 0:09:00.657 ********
2026-01-05 00:57:21.678877 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:21.678883 | orchestrator |
2026-01-05 00:57:21.678887 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-05 00:57:21.678892 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:01.335) 0:09:01.993 ********
2026-01-05 00:57:21.678897 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.678901 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.678906 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.678910 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.678914 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.678919 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.678923 | orchestrator |
2026-01-05 00:57:21.678928 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-05 00:57:21.678932 | orchestrator | Monday 05 January 2026 00:55:06 +0000 (0:00:02.146) 0:09:04.140 ********
2026-01-05 00:57:21.678937 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.678941 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.678946 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.678950 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.678955 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.678959 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.678964 | orchestrator |
2026-01-05 00:57:21.678968 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-05 00:57:21.678973 | orchestrator | Monday 05 January 2026 00:55:10 +0000 (0:00:03.930) 0:09:08.070 ********
2026-01-05 00:57:21.678978 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:21.678982 | orchestrator |
2026-01-05 00:57:21.678987 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-05 00:57:21.678991 | orchestrator | Monday 05 January 2026 00:55:11 +0000 (0:00:01.382) 0:09:09.453 ********
2026-01-05 00:57:21.678999 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679004 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679008 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679013 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.679017 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.679022 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.679026 | orchestrator |
2026-01-05 00:57:21.679031 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-05 00:57:21.679036 | orchestrator | Monday 05 January 2026 00:55:12 +0000 (0:00:00.939) 0:09:10.392 ********
2026-01-05 00:57:21.679040 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.679048 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.679053 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.679057 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:21.679062 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:21.679066 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:21.679071 | orchestrator |
2026-01-05 00:57:21.679075 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-05 00:57:21.679080 | orchestrator | Monday 05 January 2026 00:55:14 +0000 (0:00:02.053) 0:09:12.446 ********
2026-01-05 00:57:21.679084 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679089 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679093 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679098 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:21.679102 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:21.679107 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:21.679111 | orchestrator |
2026-01-05 00:57:21.679140 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-05 00:57:21.679146 | orchestrator |
2026-01-05 00:57:21.679150 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:57:21.679155 | orchestrator | Monday 05 January 2026 00:55:15 +0000 (0:00:01.207) 0:09:13.653 ********
2026-01-05 00:57:21.679160 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.679165 | orchestrator |
2026-01-05 00:57:21.679169 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 00:57:21.679174 | orchestrator | Monday 05 January 2026 00:55:16 +0000 (0:00:00.529) 0:09:14.183 ********
2026-01-05 00:57:21.679179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.679183 | orchestrator |
2026-01-05 00:57:21.679188 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 00:57:21.679193 | orchestrator | Monday 05 January 2026 00:55:17 +0000 (0:00:00.316) 0:09:14.970 ********
2026-01-05 00:57:21.679197 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679202 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679206 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679211 | orchestrator |
2026-01-05 00:57:21.679215 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 00:57:21.679220 | orchestrator | Monday 05 January 2026 00:55:17 +0000 (0:00:00.316) 0:09:15.287 ********
2026-01-05 00:57:21.679225 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679229 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679234 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679238 | orchestrator |
2026-01-05 00:57:21.679243 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:57:21.679248 | orchestrator | Monday 05 January 2026 00:55:18 +0000 (0:00:00.709) 0:09:15.996 ********
2026-01-05 00:57:21.679252 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679257 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679261 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679266 | orchestrator |
2026-01-05 00:57:21.679281 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:57:21.679286 | orchestrator | Monday 05 January 2026 00:55:19 +0000 (0:00:01.006) 0:09:17.003 ********
2026-01-05 00:57:21.679297 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679301 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679306 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679310 | orchestrator |
2026-01-05 00:57:21.679315 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:57:21.679319 | orchestrator | Monday 05 January 2026 00:55:20 +0000 (0:00:00.710) 0:09:17.713 ********
2026-01-05 00:57:21.679324 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679328 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679333 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679337 | orchestrator |
2026-01-05 00:57:21.679342 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:57:21.679346 | orchestrator | Monday 05 January 2026 00:55:20 +0000 (0:00:00.442) 0:09:18.155 ********
2026-01-05 00:57:21.679351 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679356 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679360 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679365 | orchestrator |
2026-01-05 00:57:21.679369 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:57:21.679374 | orchestrator | Monday 05 January 2026 00:55:20 +0000 (0:00:00.363) 0:09:18.519 ********
2026-01-05 00:57:21.679378 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679384 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679391 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679398 | orchestrator |
2026-01-05 00:57:21.679405 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:57:21.679413 | orchestrator | Monday 05 January 2026 00:55:21 +0000 (0:00:00.583) 0:09:19.102 ********
2026-01-05 00:57:21.679420 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679426 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679431 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679436 | orchestrator |
2026-01-05 00:57:21.679440 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:57:21.679445 | orchestrator | Monday 05 January 2026 00:55:22 +0000 (0:00:00.785) 0:09:19.887 ********
2026-01-05 00:57:21.679449 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679454 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679458 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679463 | orchestrator |
2026-01-05 00:57:21.679467 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 00:57:21.679472 | orchestrator | Monday 05 January 2026 00:55:23 +0000 (0:00:00.859) 0:09:20.747 ********
2026-01-05 00:57:21.679477 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679485 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679490 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679495 | orchestrator |
2026-01-05 00:57:21.679499 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 00:57:21.679504 | orchestrator | Monday 05 January 2026 00:55:23 +0000 (0:00:00.349) 0:09:21.096 ********
2026-01-05 00:57:21.679509 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679516 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679521 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679526 | orchestrator |
2026-01-05 00:57:21.679530 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 00:57:21.679535 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:00.615) 0:09:21.712 ********
2026-01-05 00:57:21.679540 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679544 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679549 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679553 | orchestrator |
2026-01-05 00:57:21.679558 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 00:57:21.679563 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:00.361) 0:09:22.073 ********
2026-01-05 00:57:21.679567 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679576 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679580 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679585 | orchestrator |
2026-01-05 00:57:21.679592 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 00:57:21.679597 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:00.354) 0:09:22.428 ********
2026-01-05 00:57:21.679602 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679606 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679611 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679615 | orchestrator |
2026-01-05 00:57:21.679620 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:57:21.679624 | orchestrator | Monday 05 January 2026 00:55:25 +0000 (0:00:00.349) 0:09:22.777 ********
2026-01-05 00:57:21.679629 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679633 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679638 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679643 | orchestrator |
2026-01-05 00:57:21.679647 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:57:21.679652 | orchestrator | Monday 05 January 2026 00:55:25 +0000 (0:00:00.619) 0:09:23.396 ********
2026-01-05 00:57:21.679656 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679661 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679665 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679670 | orchestrator |
2026-01-05 00:57:21.679675 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:57:21.679679 | orchestrator | Monday 05 January 2026 00:55:26 +0000 (0:00:00.355) 0:09:23.752 ********
2026-01-05 00:57:21.679684 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679688 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679693 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679697 | orchestrator |
2026-01-05 00:57:21.679702 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:57:21.679706 | orchestrator | Monday 05 January 2026 00:55:26 +0000 (0:00:00.320) 0:09:24.072 ********
2026-01-05 00:57:21.679711 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679716 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679720 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679725 | orchestrator |
2026-01-05 00:57:21.679729 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:57:21.679734 | orchestrator | Monday 05 January 2026 00:55:26 +0000 (0:00:00.399) 0:09:24.471 ********
2026-01-05 00:57:21.679738 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.679743 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.679749 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.679756 | orchestrator |
2026-01-05 00:57:21.679761 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-05 00:57:21.679765 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:00.886) 0:09:25.358 ********
2026-01-05 00:57:21.679770 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.679774 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.679781 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-05 00:57:21.679789 | orchestrator |
2026-01-05 00:57:21.679796 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-05 00:57:21.679802 | orchestrator | Monday 05 January 2026 00:55:28 +0000 (0:00:00.412) 0:09:25.770 ********
2026-01-05 00:57:21.679811 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:57:21.679821 | orchestrator |
2026-01-05 00:57:21.679828 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-05 00:57:21.679835 | orchestrator | Monday 05 January 2026 00:55:30 +0000 (0:00:02.236) 0:09:28.006 ********
2026-01-05 00:57:21.679844 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-05 00:57:21.679859 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.679866 | orchestrator |
2026-01-05 00:57:21.679874 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-05 00:57:21.679883 | orchestrator | Monday 05 January 2026 00:55:30 +0000 (0:00:00.229) 0:09:28.236 ********
2026-01-05 00:57:21.679893 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:57:21.679906 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:57:21.679911 | orchestrator |
2026-01-05 00:57:21.679915 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-05 00:57:21.679920 | orchestrator | Monday 05 January 2026 00:55:39 +0000 (0:00:08.680) 0:09:36.916 ********
2026-01-05 00:57:21.679928 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:57:21.679933 | orchestrator |
2026-01-05 00:57:21.679938 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-05 00:57:21.679942 | orchestrator | Monday 05 January 2026 00:55:43 +0000 (0:00:03.861) 0:09:40.778 ********
2026-01-05 00:57:21.679947 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.679952 | orchestrator |
2026-01-05 00:57:21.679956 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-05 00:57:21.679961 | orchestrator | Monday 05 January 2026 00:55:43 +0000 (0:00:00.520) 0:09:41.299 ********
2026-01-05 00:57:21.679965 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 00:57:21.679972 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 00:57:21.679977 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 00:57:21.679981 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-05 00:57:21.679986 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-05 00:57:21.679991 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-05 00:57:21.679995 | orchestrator |
2026-01-05 00:57:21.680000 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-05 00:57:21.680004 | orchestrator | Monday 05 January 2026 00:55:44 +0000 (0:00:01.040) 0:09:42.339 ********
2026-01-05 00:57:21.680009 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:57:21.680013 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-05 00:57:21.680018 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-05 00:57:21.680022 | orchestrator |
2026-01-05 00:57:21.680027 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-05 00:57:21.680032 | orchestrator | Monday 05 January 2026 00:55:47 +0000 (0:00:02.503) 0:09:44.843 ********
2026-01-05 00:57:21.680036 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 00:57:21.680041 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-05 00:57:21.680045 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.680050 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-05 00:57:21.680054 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-05 00:57:21.680059 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.680063 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-05 00:57:21.680070 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-05 00:57:21.680078 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.680087 | orchestrator |
2026-01-05 00:57:21.680092 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-01-05 00:57:21.680096 | orchestrator | Monday 05 January 2026 00:55:48 +0000 (0:00:01.709) 0:09:46.552 ********
2026-01-05 00:57:21.680101 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.680105 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.680110 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.680114 | orchestrator |
2026-01-05 00:57:21.680119 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-01-05 00:57:21.680123 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:02.742) 0:09:49.295 ********
2026-01-05 00:57:21.680128 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.680132 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.680137 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.680141 | orchestrator |
2026-01-05 00:57:21.680146 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-01-05 00:57:21.680150 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:00.332) 0:09:49.628 ********
2026-01-05 00:57:21.680155 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.680160 | orchestrator |
2026-01-05 00:57:21.680164 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-05 00:57:21.680169 | orchestrator | Monday 05 January 2026 00:55:52 +0000 (0:00:00.828) 0:09:50.456 ********
2026-01-05 00:57:21.680173 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.680178 | orchestrator |
2026-01-05 00:57:21.680182 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-05 00:57:21.680187 | orchestrator | Monday 05 January 2026 00:55:53 +0000 (0:00:00.548) 0:09:51.005 ********
2026-01-05 00:57:21.680191 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.680196 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.680200 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.680205 | orchestrator |
2026-01-05 00:57:21.680210 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-05 00:57:21.680214 | orchestrator | Monday 05 January 2026 00:55:54 +0000 (0:00:01.416) 0:09:52.422 ********
2026-01-05 00:57:21.680219 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.680223 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.680228 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.680232 | orchestrator |
2026-01-05 00:57:21.680237 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-05 00:57:21.680244 | orchestrator | Monday 05 January 2026 00:55:56 +0000 (0:00:01.698) 0:09:54.120 ********
2026-01-05 00:57:21.680249 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.680253 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.680258 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.680262 | orchestrator |
2026-01-05 00:57:21.680267 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-05 00:57:21.680286 | orchestrator | Monday 05 January 2026 00:55:58 +0000 (0:00:02.169) 0:09:56.290 ********
2026-01-05 00:57:21.680291 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.680298 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.680303 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.680308 | orchestrator |
2026-01-05 00:57:21.680312 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-05 00:57:21.680317 | orchestrator | Monday 05 January 2026 00:56:00 +0000 (0:00:02.198) 0:09:58.488 ********
2026-01-05 00:57:21.680321 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.680326 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.680330 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.680335 | orchestrator |
2026-01-05 00:57:21.680340 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-05 00:57:21.680347 | orchestrator | Monday 05 January 2026 00:56:02 +0000 (0:00:01.602) 0:10:00.091 ********
2026-01-05 00:57:21.680352 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.680356 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.680361 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.680366 | orchestrator |
2026-01-05 00:57:21.680376 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-05 00:57:21.680384 | orchestrator | Monday 05 January 2026 00:56:03 +0000 (0:00:00.720) 0:10:00.812 ********
2026-01-05 00:57:21.680389 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.680394 | orchestrator |
2026-01-05 00:57:21.680398 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-05 00:57:21.680403 | orchestrator | Monday 05 January 2026 00:56:04 +0000 (0:00:00.991) 0:10:01.803 ********
2026-01-05 00:57:21.680407 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.680412 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.680416 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.680421 | orchestrator |
2026-01-05 00:57:21.680425 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-05 00:57:21.680430 | orchestrator | Monday 05 January 2026 00:56:04 +0000 (0:00:00.368) 0:10:02.172 ********
2026-01-05 00:57:21.680434 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:57:21.680439 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:57:21.680443 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:57:21.680448 | orchestrator |
2026-01-05 00:57:21.680452 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-05 00:57:21.680457 | orchestrator | Monday 05 January 2026 00:56:05 +0000 (0:00:01.294) 0:10:03.466 ********
2026-01-05 00:57:21.680461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:57:21.680466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:57:21.680470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:57:21.680475 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.680479 | orchestrator |
2026-01-05 00:57:21.680484 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-05 00:57:21.680488 | orchestrator | Monday 05 January 2026 00:56:06 +0000 (0:00:00.819) 0:10:04.285 ********
2026-01-05 00:57:21.680493 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.680497 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.680502 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.680506 | orchestrator |
2026-01-05 00:57:21.680511 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-05 00:57:21.680515 | orchestrator |
2026-01-05 00:57:21.680520 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:57:21.680525 | orchestrator | Monday 05 January 2026 00:56:07 +0000 (0:00:00.757) 0:10:05.043 ********
2026-01-05 00:57:21.680532 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.680538 | orchestrator |
2026-01-05 00:57:21.680542 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 00:57:21.680547 | orchestrator | Monday 05 January 2026 00:56:07 +0000 (0:00:00.435) 0:10:05.478 ********
2026-01-05 00:57:21.680552 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:57:21.680556 | orchestrator |
2026-01-05 00:57:21.680561 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 00:57:21.680565 | orchestrator | Monday 05 January 2026 00:56:08 +0000 (0:00:00.676) 0:10:06.155 ********
2026-01-05 00:57:21.680570 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.680574 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.680579 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.680583 | orchestrator |
2026-01-05 00:57:21.680588 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 00:57:21.680595 | orchestrator | Monday 05 January 2026 00:56:08 +0000 (0:00:00.444) 0:10:06.600 ********
2026-01-05 00:57:21.680600 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.680604 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.680609 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.680613 | orchestrator |
2026-01-05 00:57:21.680618 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:57:21.680622 | orchestrator | Monday 05 January 2026 00:56:10 +0000 (0:00:01.206) 0:10:07.806 ********
2026-01-05 00:57:21.680627 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.680631 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.680636 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.680640 | orchestrator |
2026-01-05 00:57:21.680645 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:57:21.680650 | orchestrator | Monday 05 January 2026 00:56:10 +0000 (0:00:00.812) 0:10:08.619 ********
2026-01-05 00:57:21.680654 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.680659 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.680663 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.680668 | orchestrator |
2026-01-05 00:57:21.680672 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:57:21.680677 | orchestrator | Monday 05 January 2026 00:56:12 +0000 (0:00:01.113) 0:10:09.732 ********
2026-01-05 00:57:21.680681 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.680686 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.680690 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.680695 | orchestrator |
2026-01-05 00:57:21.680702 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:57:21.680707 | orchestrator | Monday 05 January 2026 00:56:12 +0000 (0:00:00.319) 0:10:10.052 ********
2026-01-05 00:57:21.680711 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.680716 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.680720 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.680725 | orchestrator |
2026-01-05 00:57:21.680730 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:57:21.680734 | orchestrator | Monday 05 January 2026 00:56:12 +0000 (0:00:00.295) 0:10:10.347 ********
2026-01-05 00:57:21.680739 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.680743 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.680748 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:57:21.680752 | orchestrator |
2026-01-05 00:57:21.680757 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:57:21.680764 | orchestrator | Monday 05 January 2026 00:56:13 +0000 (0:00:00.345) 0:10:10.693 ********
2026-01-05 00:57:21.680769 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.680773 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.680778 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.680782 | orchestrator |
2026-01-05 00:57:21.680787 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:57:21.680792 | orchestrator | Monday 05 January 2026 00:56:14 +0000 (0:00:01.120) 0:10:11.814 ********
2026-01-05 00:57:21.680796 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:57:21.680801 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:57:21.680805 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:57:21.680810 | orchestrator |
2026-01-05 00:57:21.680815 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 00:57:21.680819 | orchestrator | Monday 05 January 2026 00:56:14 +0000 (0:00:00.796) 0:10:12.611 ********
2026-01-05 00:57:21.680824 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:57:21.680828 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:57:21.680833 | orchestrator | skipping: [testbed-node-5]
2026-01-05
00:57:21.680837 | orchestrator | 2026-01-05 00:57:21.680842 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 00:57:21.680847 | orchestrator | Monday 05 January 2026 00:56:15 +0000 (0:00:00.333) 0:10:12.944 ******** 2026-01-05 00:57:21.680854 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.680859 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.680863 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.680868 | orchestrator | 2026-01-05 00:57:21.680873 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 00:57:21.680877 | orchestrator | Monday 05 January 2026 00:56:15 +0000 (0:00:00.365) 0:10:13.310 ******** 2026-01-05 00:57:21.680882 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.680886 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.680892 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.680900 | orchestrator | 2026-01-05 00:57:21.680905 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 00:57:21.680909 | orchestrator | Monday 05 January 2026 00:56:16 +0000 (0:00:00.786) 0:10:14.097 ******** 2026-01-05 00:57:21.680914 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.680918 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.680923 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.680927 | orchestrator | 2026-01-05 00:57:21.680932 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 00:57:21.680936 | orchestrator | Monday 05 January 2026 00:56:16 +0000 (0:00:00.472) 0:10:14.570 ******** 2026-01-05 00:57:21.680941 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.680945 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.680950 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.680954 | orchestrator | 2026-01-05 
00:57:21.680959 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 00:57:21.680963 | orchestrator | Monday 05 January 2026 00:56:17 +0000 (0:00:00.322) 0:10:14.892 ******** 2026-01-05 00:57:21.680968 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.680972 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.680977 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.680981 | orchestrator | 2026-01-05 00:57:21.680986 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 00:57:21.680990 | orchestrator | Monday 05 January 2026 00:56:17 +0000 (0:00:00.383) 0:10:15.275 ******** 2026-01-05 00:57:21.680995 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.680999 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.681004 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.681008 | orchestrator | 2026-01-05 00:57:21.681013 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 00:57:21.681018 | orchestrator | Monday 05 January 2026 00:56:18 +0000 (0:00:00.452) 0:10:15.727 ******** 2026-01-05 00:57:21.681022 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.681027 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.681031 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.681036 | orchestrator | 2026-01-05 00:57:21.681040 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 00:57:21.681045 | orchestrator | Monday 05 January 2026 00:56:18 +0000 (0:00:00.288) 0:10:16.016 ******** 2026-01-05 00:57:21.681049 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.681054 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.681058 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.681063 | orchestrator | 2026-01-05 00:57:21.681067 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 00:57:21.681072 | orchestrator | Monday 05 January 2026 00:56:18 +0000 (0:00:00.366) 0:10:16.383 ******** 2026-01-05 00:57:21.681076 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.681081 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.681085 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.681090 | orchestrator | 2026-01-05 00:57:21.681095 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-05 00:57:21.681099 | orchestrator | Monday 05 January 2026 00:56:19 +0000 (0:00:00.612) 0:10:16.995 ******** 2026-01-05 00:57:21.681104 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.681111 | orchestrator | 2026-01-05 00:57:21.681117 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-05 00:57:21.681127 | orchestrator | Monday 05 January 2026 00:56:19 +0000 (0:00:00.610) 0:10:17.606 ******** 2026-01-05 00:57:21.681131 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:57:21.681136 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 00:57:21.681141 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:57:21.681145 | orchestrator | 2026-01-05 00:57:21.681150 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-05 00:57:21.681154 | orchestrator | Monday 05 January 2026 00:56:22 +0000 (0:00:02.196) 0:10:19.802 ******** 2026-01-05 00:57:21.681159 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 00:57:21.681163 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 00:57:21.681168 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 00:57:21.681175 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 00:57:21.681179 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.681184 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.681188 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 00:57:21.681193 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 00:57:21.681197 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.681202 | orchestrator | 2026-01-05 00:57:21.681206 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-05 00:57:21.681211 | orchestrator | Monday 05 January 2026 00:56:23 +0000 (0:00:01.643) 0:10:21.445 ******** 2026-01-05 00:57:21.681215 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.681220 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.681225 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.681229 | orchestrator | 2026-01-05 00:57:21.681234 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-05 00:57:21.681238 | orchestrator | Monday 05 January 2026 00:56:24 +0000 (0:00:00.416) 0:10:21.861 ******** 2026-01-05 00:57:21.681243 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.681247 | orchestrator | 2026-01-05 00:57:21.681252 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-05 00:57:21.681256 | orchestrator | Monday 05 January 2026 00:56:24 +0000 (0:00:00.630) 0:10:22.492 ******** 2026-01-05 00:57:21.681261 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 00:57:21.681266 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 00:57:21.681299 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 00:57:21.681304 | orchestrator | 2026-01-05 00:57:21.681309 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-05 00:57:21.681313 | orchestrator | Monday 05 January 2026 00:56:26 +0000 (0:00:01.343) 0:10:23.835 ******** 2026-01-05 00:57:21.681318 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:57:21.681323 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 00:57:21.681327 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:57:21.681332 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 00:57:21.681336 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:57:21.681344 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 00:57:21.681349 | orchestrator | 2026-01-05 00:57:21.681353 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-05 00:57:21.681358 | orchestrator | Monday 05 January 2026 00:56:30 +0000 (0:00:04.351) 0:10:28.187 ******** 2026-01-05 00:57:21.681363 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:57:21.681367 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:57:21.681372 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:57:21.681376 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:57:21.681381 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:57:21.681385 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:57:21.681390 | orchestrator | 2026-01-05 00:57:21.681395 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-05 00:57:21.681399 | orchestrator | Monday 05 January 2026 00:56:32 +0000 (0:00:02.428) 0:10:30.616 ******** 2026-01-05 00:57:21.681404 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 00:57:21.681408 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.681413 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 00:57:21.681417 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.681425 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 00:57:21.681431 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.681436 | orchestrator | 2026-01-05 00:57:21.681440 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-05 00:57:21.681448 | orchestrator | Monday 05 January 2026 00:56:34 +0000 (0:00:01.298) 0:10:31.915 ******** 2026-01-05 00:57:21.681452 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-05 00:57:21.681457 | orchestrator | 2026-01-05 00:57:21.681462 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-05 00:57:21.681466 | orchestrator | Monday 05 January 2026 00:56:34 +0000 (0:00:00.235) 0:10:32.150 ******** 2026-01-05 00:57:21.681471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-05 00:57:21.681476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681498 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.681502 | orchestrator | 2026-01-05 00:57:21.681507 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-05 00:57:21.681512 | orchestrator | Monday 05 January 2026 00:56:35 +0000 (0:00:01.269) 0:10:33.420 ******** 2026-01-05 00:57:21.681516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:57:21.681554 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
00:57:21.681559 | orchestrator | 2026-01-05 00:57:21.681564 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-05 00:57:21.681568 | orchestrator | Monday 05 January 2026 00:56:36 +0000 (0:00:00.659) 0:10:34.079 ******** 2026-01-05 00:57:21.681573 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:57:21.681578 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:57:21.681582 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:57:21.681587 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:57:21.681591 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:57:21.681596 | orchestrator | 2026-01-05 00:57:21.681601 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-05 00:57:21.681605 | orchestrator | Monday 05 January 2026 00:57:07 +0000 (0:00:31.189) 0:11:05.269 ******** 2026-01-05 00:57:21.681610 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.681614 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.681619 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.681623 | orchestrator | 2026-01-05 00:57:21.681628 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-05 00:57:21.681633 | orchestrator | 
Monday 05 January 2026 00:57:07 +0000 (0:00:00.326) 0:11:05.595 ******** 2026-01-05 00:57:21.681637 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.681642 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.681646 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.681651 | orchestrator | 2026-01-05 00:57:21.681655 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-05 00:57:21.681660 | orchestrator | Monday 05 January 2026 00:57:08 +0000 (0:00:00.354) 0:11:05.950 ******** 2026-01-05 00:57:21.681664 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.681669 | orchestrator | 2026-01-05 00:57:21.681674 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-05 00:57:21.681678 | orchestrator | Monday 05 January 2026 00:57:09 +0000 (0:00:00.846) 0:11:06.796 ******** 2026-01-05 00:57:21.681683 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.681687 | orchestrator | 2026-01-05 00:57:21.681695 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-05 00:57:21.681699 | orchestrator | Monday 05 January 2026 00:57:09 +0000 (0:00:00.559) 0:11:07.356 ******** 2026-01-05 00:57:21.681705 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.681713 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.681718 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.681722 | orchestrator | 2026-01-05 00:57:21.681727 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-05 00:57:21.681731 | orchestrator | Monday 05 January 2026 00:57:10 +0000 (0:00:01.261) 0:11:08.617 ******** 2026-01-05 00:57:21.681736 | orchestrator | changed: 
[testbed-node-3] 2026-01-05 00:57:21.681743 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.681748 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.681752 | orchestrator | 2026-01-05 00:57:21.681757 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-05 00:57:21.681764 | orchestrator | Monday 05 January 2026 00:57:12 +0000 (0:00:01.483) 0:11:10.101 ******** 2026-01-05 00:57:21.681769 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:57:21.681773 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:57:21.681778 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:57:21.681782 | orchestrator | 2026-01-05 00:57:21.681787 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-05 00:57:21.681791 | orchestrator | Monday 05 January 2026 00:57:14 +0000 (0:00:02.045) 0:11:12.147 ******** 2026-01-05 00:57:21.681796 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 00:57:21.681800 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 00:57:21.681805 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 00:57:21.681810 | orchestrator | 2026-01-05 00:57:21.681814 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 00:57:21.681819 | orchestrator | Monday 05 January 2026 00:57:17 +0000 (0:00:02.911) 0:11:15.058 ******** 2026-01-05 00:57:21.681823 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.681828 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.681833 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.681837 | orchestrator 
| 2026-01-05 00:57:21.681842 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-05 00:57:21.681846 | orchestrator | Monday 05 January 2026 00:57:17 +0000 (0:00:00.386) 0:11:15.444 ******** 2026-01-05 00:57:21.681851 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:57:21.681855 | orchestrator | 2026-01-05 00:57:21.681860 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-05 00:57:21.681864 | orchestrator | Monday 05 January 2026 00:57:18 +0000 (0:00:00.548) 0:11:15.993 ******** 2026-01-05 00:57:21.681869 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.681873 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.681877 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.681881 | orchestrator | 2026-01-05 00:57:21.681885 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-05 00:57:21.681899 | orchestrator | Monday 05 January 2026 00:57:18 +0000 (0:00:00.625) 0:11:16.618 ******** 2026-01-05 00:57:21.681904 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:57:21.681908 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:57:21.681912 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:57:21.681916 | orchestrator | 2026-01-05 00:57:21.681920 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-05 00:57:21.681925 | orchestrator | Monday 05 January 2026 00:57:19 +0000 (0:00:00.362) 0:11:16.980 ******** 2026-01-05 00:57:21.681929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:57:21.681933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:57:21.681937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:57:21.681941 | orchestrator 
| skipping: [testbed-node-3] 2026-01-05 00:57:21.681945 | orchestrator | 2026-01-05 00:57:21.681949 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-05 00:57:21.681953 | orchestrator | Monday 05 January 2026 00:57:19 +0000 (0:00:00.676) 0:11:17.657 ******** 2026-01-05 00:57:21.681957 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:57:21.681962 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:57:21.681966 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:57:21.681972 | orchestrator | 2026-01-05 00:57:21.681977 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:57:21.681981 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-05 00:57:21.681985 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-05 00:57:21.681989 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-05 00:57:21.681993 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-05 00:57:21.681998 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-05 00:57:21.682008 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-05 00:57:21.682035 | orchestrator | 2026-01-05 00:57:21.682040 | orchestrator | 2026-01-05 00:57:21.682044 | orchestrator | 2026-01-05 00:57:21.682048 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:57:21.682052 | orchestrator | Monday 05 January 2026 00:57:20 +0000 (0:00:00.261) 0:11:17.918 ******** 2026-01-05 00:57:21.682062 | orchestrator | =============================================================================== 
2026-01-05 00:57:21.682067 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 56.64s 2026-01-05 00:57:21.682071 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.58s 2026-01-05 00:57:21.682075 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 35.96s 2026-01-05 00:57:21.682079 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.19s 2026-01-05 00:57:21.682114 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.40s 2026-01-05 00:57:21.682119 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.53s 2026-01-05 00:57:21.682123 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.48s 2026-01-05 00:57:21.682128 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.36s 2026-01-05 00:57:21.682132 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.68s 2026-01-05 00:57:21.682136 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.94s 2026-01-05 00:57:21.682140 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.59s 2026-01-05 00:57:21.682144 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 5.33s 2026-01-05 00:57:21.682148 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.26s 2026-01-05 00:57:21.682152 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.35s 2026-01-05 00:57:21.682157 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.23s 2026-01-05 00:57:21.682161 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.93s 2026-01-05 
00:57:21.682165 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.86s
2026-01-05 00:57:21.682169 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.80s
2026-01-05 00:57:21.682173 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.80s
2026-01-05 00:57:21.682177 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.70s
2026-01-05 00:57:21.682181 | orchestrator | 2026-01-05 00:57:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:24.725683 | orchestrator | 2026-01-05 00:57:24 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:24.728545 | orchestrator | 2026-01-05 00:57:24 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:24.731060 | orchestrator | 2026-01-05 00:57:24 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:24.731326 | orchestrator | 2026-01-05 00:57:24 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:27.785888 | orchestrator | 2026-01-05 00:57:27 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:27.786845 | orchestrator | 2026-01-05 00:57:27 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:27.789489 | orchestrator | 2026-01-05 00:57:27 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:27.789651 | orchestrator | 2026-01-05 00:57:27 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:30.838214 | orchestrator | 2026-01-05 00:57:30 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:30.838520 | orchestrator | 2026-01-05 00:57:30 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:30.840526 | orchestrator | 2026-01-05 00:57:30 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:30.840565 | orchestrator | 2026-01-05 00:57:30 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:33.908235 | orchestrator | 2026-01-05 00:57:33 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:33.911654 | orchestrator | 2026-01-05 00:57:33 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:33.913352 | orchestrator | 2026-01-05 00:57:33 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:33.913404 | orchestrator | 2026-01-05 00:57:33 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:36.969561 | orchestrator | 2026-01-05 00:57:36 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:36.971092 | orchestrator | 2026-01-05 00:57:36 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:36.972735 | orchestrator | 2026-01-05 00:57:36 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:36.972783 | orchestrator | 2026-01-05 00:57:36 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:40.022949 | orchestrator | 2026-01-05 00:57:40 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:40.025363 | orchestrator | 2026-01-05 00:57:40 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:40.027195 | orchestrator | 2026-01-05 00:57:40 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:40.027272 | orchestrator | 2026-01-05 00:57:40 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:43.063897 | orchestrator | 2026-01-05 00:57:43 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:43.066621 | orchestrator | 2026-01-05 00:57:43 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:43.066823 | orchestrator | 2026-01-05 00:57:43 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:43.066917 | orchestrator | 2026-01-05 00:57:43 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:46.116175 | orchestrator | 2026-01-05 00:57:46 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:46.119021 | orchestrator | 2026-01-05 00:57:46 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:46.121742 | orchestrator | 2026-01-05 00:57:46 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:46.123742 | orchestrator | 2026-01-05 00:57:46 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:49.172949 | orchestrator | 2026-01-05 00:57:49 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:49.175360 | orchestrator | 2026-01-05 00:57:49 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:49.179636 | orchestrator | 2026-01-05 00:57:49 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:49.179708 | orchestrator | 2026-01-05 00:57:49 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:52.240324 | orchestrator | 2026-01-05 00:57:52 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:52.241741 | orchestrator | 2026-01-05 00:57:52 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:52.243176 | orchestrator | 2026-01-05 00:57:52 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:52.243237 | orchestrator | 2026-01-05 00:57:52 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:55.302804 | orchestrator | 2026-01-05 00:57:55 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:55.304644 | orchestrator | 2026-01-05 00:57:55 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:55.309642 | orchestrator | 2026-01-05 00:57:55 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:55.309714 | orchestrator | 2026-01-05 00:57:55 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:58.357006 | orchestrator | 2026-01-05 00:57:58 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:57:58.359123 | orchestrator | 2026-01-05 00:57:58 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:57:58.360915 | orchestrator | 2026-01-05 00:57:58 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:57:58.360968 | orchestrator | 2026-01-05 00:57:58 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:01.404624 | orchestrator | 2026-01-05 00:58:01 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:01.407060 | orchestrator | 2026-01-05 00:58:01 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:01.409150 | orchestrator | 2026-01-05 00:58:01 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:01.409225 | orchestrator | 2026-01-05 00:58:01 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:04.464713 | orchestrator | 2026-01-05 00:58:04 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:04.469092 | orchestrator | 2026-01-05 00:58:04 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:04.472121 | orchestrator | 2026-01-05 00:58:04 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:04.472192 | orchestrator | 2026-01-05 00:58:04 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:07.528972 | orchestrator | 2026-01-05 00:58:07 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:07.530898 | orchestrator | 2026-01-05 00:58:07 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:07.533636 | orchestrator | 2026-01-05 00:58:07 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:07.533695 | orchestrator | 2026-01-05 00:58:07 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:10.589351 | orchestrator | 2026-01-05 00:58:10 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:10.595833 | orchestrator | 2026-01-05 00:58:10 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:10.597827 | orchestrator | 2026-01-05 00:58:10 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:10.597975 | orchestrator | 2026-01-05 00:58:10 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:13.650331 | orchestrator | 2026-01-05 00:58:13 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:13.654102 | orchestrator | 2026-01-05 00:58:13 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:13.654284 | orchestrator | 2026-01-05 00:58:13 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:13.654589 | orchestrator | 2026-01-05 00:58:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:16.713392 | orchestrator | 2026-01-05 00:58:16 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:16.716385 | orchestrator | 2026-01-05 00:58:16 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:16.721668 | orchestrator | 2026-01-05 00:58:16 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:16.721770 | orchestrator | 2026-01-05 00:58:16 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:19.770646 | orchestrator | 2026-01-05 00:58:19 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:19.772307 | orchestrator | 2026-01-05 00:58:19 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:19.775293 | orchestrator | 2026-01-05 00:58:19 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:19.775369 | orchestrator | 2026-01-05 00:58:19 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:22.820562 | orchestrator | 2026-01-05 00:58:22 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:22.824108 | orchestrator | 2026-01-05 00:58:22 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:22.826509 | orchestrator | 2026-01-05 00:58:22 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:22.826554 | orchestrator | 2026-01-05 00:58:22 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:25.869355 | orchestrator | 2026-01-05 00:58:25 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:25.871218 | orchestrator | 2026-01-05 00:58:25 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:25.873046 | orchestrator | 2026-01-05 00:58:25 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:25.873087 | orchestrator | 2026-01-05 00:58:25 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:28.925818 | orchestrator | 2026-01-05 00:58:28 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:28.927422 | orchestrator | 2026-01-05 00:58:28 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:28.929758 | orchestrator | 2026-01-05 00:58:28 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:28.929881 | orchestrator | 2026-01-05 00:58:28 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:31.968555 | orchestrator | 2026-01-05 00:58:31 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:31.970253 | orchestrator | 2026-01-05 00:58:31 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:31.973242 | orchestrator | 2026-01-05 00:58:31 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:31.973295 | orchestrator | 2026-01-05 00:58:31 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:35.014940 | orchestrator | 2026-01-05 00:58:35 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:35.015599 | orchestrator | 2026-01-05 00:58:35 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:35.017280 | orchestrator | 2026-01-05 00:58:35 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:35.017372 | orchestrator | 2026-01-05 00:58:35 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:38.057708 | orchestrator | 2026-01-05 00:58:38 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:38.059635 | orchestrator | 2026-01-05 00:58:38 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:38.061162 | orchestrator | 2026-01-05 00:58:38 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:38.061254 | orchestrator | 2026-01-05 00:58:38 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:41.112138 | orchestrator | 2026-01-05 00:58:41 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state STARTED
2026-01-05 00:58:41.114694 | orchestrator | 2026-01-05 00:58:41 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED
2026-01-05 00:58:41.116386 | orchestrator | 2026-01-05 00:58:41 | 
INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED
2026-01-05 00:58:41.116437 | orchestrator | 2026-01-05 00:58:41 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:44.161525 | orchestrator | 
2026-01-05 00:58:44.161625 | orchestrator | 
2026-01-05 00:58:44.161638 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:58:44.161648 | orchestrator | 
2026-01-05 00:58:44.161657 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:58:44.161666 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:00.263) 0:00:00.263 ********
2026-01-05 00:58:44.161675 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:58:44.161686 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:58:44.161694 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:58:44.161703 | orchestrator | 
2026-01-05 00:58:44.161712 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:58:44.161722 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:00.309) 0:00:00.572 ********
2026-01-05 00:58:44.161730 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-05 00:58:44.161739 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-01-05 00:58:44.161747 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-01-05 00:58:44.161755 | orchestrator | 
2026-01-05 00:58:44.161763 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-01-05 00:58:44.161772 | orchestrator | 
2026-01-05 00:58:44.161780 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-05 00:58:44.161788 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:00.457) 0:00:01.029 ********
2026-01-05 00:58:44.161798 | orchestrator | included: 
/ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:58:44.161932 | orchestrator | 
2026-01-05 00:58:44.161943 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-01-05 00:58:44.161948 | orchestrator | Monday 05 January 2026 00:55:52 +0000 (0:00:00.513) 0:00:01.543 ********
2026-01-05 00:58:44.161952 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:58:44.161957 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:58:44.161962 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:58:44.161966 | orchestrator | 
2026-01-05 00:58:44.161971 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-01-05 00:58:44.161976 | orchestrator | Monday 05 January 2026 00:55:53 +0000 (0:00:00.727) 0:00:02.271 ********
2026-01-05 00:58:44.162273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162389 | orchestrator | 
2026-01-05 00:58:44.162397 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-05 00:58:44.162404 | orchestrator | Monday 05 January 2026 00:55:55 +0000 (0:00:01.941) 0:00:04.212 ********
2026-01-05 00:58:44.162411 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:58:44.162418 | orchestrator | 
2026-01-05 00:58:44.162426 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-01-05 00:58:44.162441 | orchestrator | Monday 05 January 2026 00:55:55 +0000 (0:00:00.568) 0:00:04.780 ********
2026-01-05 00:58:44.162449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162518 | orchestrator | 
2026-01-05 00:58:44.162526 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-01-05 00:58:44.162533 | orchestrator | Monday 05 January 2026 00:55:58 +0000 (0:00:02.850) 0:00:07.631 ********
2026-01-05 00:58:44.162541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162595 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:44.162602 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:44.162613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162640 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:44.162648 | orchestrator | 
2026-01-05 00:58:44.162655 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-01-05 00:58:44.162662 | orchestrator | Monday 05 January 2026 00:55:59 +0000 (0:00:01.325) 0:00:08.957 ********
2026-01-05 00:58:44.162669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162684 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:44.162695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 00:58:44.162708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-05 00:58:44.162720 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 00:58:44.162727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:58:44.162735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-05 00:58:44.162742 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:58:44.162749 | orchestrator | 2026-01-05 00:58:44.162757 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-05 00:58:44.162764 | orchestrator | Monday 05 January 2026 00:56:01 +0000 (0:00:01.156) 0:00:10.113 ******** 2026-01-05 00:58:44.162775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:58:44.162822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:58:44.162831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:58:44.162839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:58:44.162852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:58:44.162869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:58:44.162879 | orchestrator | 2026-01-05 00:58:44 | INFO  | Task d9b0a641-6b27-4072-b53d-a40f1ce19331 is in state SUCCESS 2026-01-05 00:58:44.162885 | orchestrator | 2026-01-05 00:58:44.162890 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-05 00:58:44.162894 | orchestrator | Monday 05 January 2026 00:56:03 +0000 (0:00:02.561) 0:00:12.674 ******** 2026-01-05 00:58:44.162899 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:58:44.162903 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:58:44.162908 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:58:44.162912 | orchestrator | 2026-01-05 00:58:44.162917 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-05 00:58:44.162922 | orchestrator | Monday 05 January 2026 00:56:06 +0000 (0:00:03.027) 0:00:15.701 ******** 2026-01-05 00:58:44.162926 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:58:44.162931 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:58:44.162935
| orchestrator | changed: [testbed-node-2] 2026-01-05 00:58:44.162940 | orchestrator | 2026-01-05 00:58:44.162945 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-01-05 00:58:44.162949 | orchestrator | Monday 05 January 2026 00:56:08 +0000 (0:00:02.129) 0:00:17.831 ******** 2026-01-05 00:58:44.162954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:58:44.162962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:58:44.162975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 00:58:44.162980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:58:44.162986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:58:44.162994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-05 00:58:44.163007 | orchestrator | 2026-01-05 00:58:44.163012 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-01-05 00:58:44.163017 | orchestrator | Monday 05 January 2026 00:56:11 +0000 (0:00:02.306) 0:00:20.137 ******** 2026-01-05 00:58:44.163021 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:58:44.163026 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:58:44.163031 | orchestrator | } 2026-01-05 00:58:44.163035 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 00:58:44.163040 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:58:44.163045 | orchestrator | } 2026-01-05 00:58:44.163049 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 00:58:44.163054 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 00:58:44.163061 | orchestrator | } 2026-01-05 00:58:44.163066 | orchestrator | 2026-01-05 00:58:44.163070 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 00:58:44.163075 | orchestrator | Monday 05 January 2026 00:56:11 +0000 (0:00:00.350) 0:00:20.488 ******** 2026-01-05 00:58:44.163080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:58:44.163085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-05 
00:58:44.163117 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:58:44.163126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:58:44.163143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-05 00:58:44.163148 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:58:44.163153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 00:58:44.163159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-05 00:58:44.163174 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:58:44.163185 | orchestrator | 2026-01-05 00:58:44.163195 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 00:58:44.163202 | orchestrator | Monday 05 January 2026 00:56:12 +0000 (0:00:01.501) 0:00:21.990 ******** 2026-01-05 00:58:44.163209 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:58:44.163216 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:58:44.163224 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:58:44.163231 | orchestrator | 2026-01-05 00:58:44.163239 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 00:58:44.163246 | orchestrator | Monday 05 January 2026 00:56:13 +0000 (0:00:00.420) 0:00:22.410 ******** 2026-01-05 00:58:44.163254 | orchestrator | 2026-01-05 00:58:44.163261 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 00:58:44.163265 | orchestrator | Monday 05 January 2026 00:56:13 +0000 (0:00:00.067) 0:00:22.478 ******** 2026-01-05 00:58:44.163270 | orchestrator | 2026-01-05 00:58:44.163275 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 00:58:44.163279 | orchestrator | Monday 05 January 2026 00:56:13 +0000 (0:00:00.069) 0:00:22.548 ******** 2026-01-05 00:58:44.163284 | orchestrator | 2026-01-05 00:58:44.163293 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-05 00:58:44.163298 | 
orchestrator | Monday 05 January 2026 00:56:13 +0000 (0:00:00.113) 0:00:22.661 ******** 2026-01-05 00:58:44.163302 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:58:44.163323 | orchestrator | 2026-01-05 00:58:44.163328 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-05 00:58:44.163332 | orchestrator | Monday 05 January 2026 00:56:13 +0000 (0:00:00.255) 0:00:22.917 ******** 2026-01-05 00:58:44.163337 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:58:44.163341 | orchestrator | 2026-01-05 00:58:44.163346 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-05 00:58:44.163350 | orchestrator | Monday 05 January 2026 00:56:14 +0000 (0:00:00.197) 0:00:23.115 ******** 2026-01-05 00:58:44.163355 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:58:44.163359 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:58:44.163364 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:58:44.163369 | orchestrator | 2026-01-05 00:58:44.163373 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-05 00:58:44.163378 | orchestrator | Monday 05 January 2026 00:57:15 +0000 (0:01:01.051) 0:01:24.166 ******** 2026-01-05 00:58:44.163382 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:58:44.163387 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:58:44.163391 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:58:44.163396 | orchestrator | 2026-01-05 00:58:44.163405 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 00:58:44.163409 | orchestrator | Monday 05 January 2026 00:58:30 +0000 (0:01:15.866) 0:02:40.033 ******** 2026-01-05 00:58:44.163414 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:58:44.163419 | orchestrator | 
2026-01-05 00:58:44.163423 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-05 00:58:44.163428 | orchestrator | Monday 05 January 2026 00:58:31 +0000 (0:00:00.622) 0:02:40.656 ******** 2026-01-05 00:58:44.163432 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:58:44.163437 | orchestrator | 2026-01-05 00:58:44.163442 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-05 00:58:44.163447 | orchestrator | Monday 05 January 2026 00:58:34 +0000 (0:00:02.605) 0:02:43.261 ******** 2026-01-05 00:58:44.163451 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:58:44.163461 | orchestrator | 2026-01-05 00:58:44.163466 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-05 00:58:44.163470 | orchestrator | Monday 05 January 2026 00:58:36 +0000 (0:00:02.432) 0:02:45.694 ******** 2026-01-05 00:58:44.163475 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:58:44.163479 | orchestrator | 2026-01-05 00:58:44.163484 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-05 00:58:44.163489 | orchestrator | Monday 05 January 2026 00:58:39 +0000 (0:00:03.363) 0:02:49.058 ******** 2026-01-05 00:58:44.163493 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:58:44.163498 | orchestrator | 2026-01-05 00:58:44.163502 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:58:44.163508 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:58:44.163514 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 00:58:44.163519 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 00:58:44.163523 | orchestrator | 2026-01-05 
00:58:44.163528 | orchestrator | 2026-01-05 00:58:44.163532 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:58:44.163537 | orchestrator | Monday 05 January 2026 00:58:42 +0000 (0:00:02.566) 0:02:51.624 ******** 2026-01-05 00:58:44.163542 | orchestrator | =============================================================================== 2026-01-05 00:58:44.163546 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.87s 2026-01-05 00:58:44.163551 | orchestrator | opensearch : Restart opensearch container ------------------------------ 61.05s 2026-01-05 00:58:44.163555 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.36s 2026-01-05 00:58:44.163560 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.03s 2026-01-05 00:58:44.163564 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.85s 2026-01-05 00:58:44.163569 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.61s 2026-01-05 00:58:44.163573 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2026-01-05 00:58:44.163578 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.56s 2026-01-05 00:58:44.163583 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.43s 2026-01-05 00:58:44.163587 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.31s 2026-01-05 00:58:44.163592 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.13s 2026-01-05 00:58:44.163596 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.94s 2026-01-05 00:58:44.163601 | orchestrator | service-check-containers : Include tasks 
-------------------------------- 1.50s 2026-01-05 00:58:44.163605 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.33s 2026-01-05 00:58:44.163610 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.16s 2026-01-05 00:58:44.163614 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.73s 2026-01-05 00:58:44.163622 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-01-05 00:58:44.163627 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-01-05 00:58:44.163632 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-01-05 00:58:44.163636 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-01-05 00:58:44.169506 | orchestrator | 2026-01-05 00:58:44 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 2026-01-05 00:58:44.169607 | orchestrator | 2026-01-05 00:58:44 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED 2026-01-05 00:58:44.169615 | orchestrator | 2026-01-05 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:47.218278 | orchestrator | 2026-01-05 00:58:47 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 2026-01-05 00:58:47.219166 | orchestrator | 2026-01-05 00:58:47 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED 2026-01-05 00:58:47.219280 | orchestrator | 2026-01-05 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:50.274126 | orchestrator | 2026-01-05 00:58:50 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 2026-01-05 00:58:50.276053 | orchestrator | 2026-01-05 00:58:50 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED 2026-01-05 00:58:50.276523 | orchestrator 
| 2026-01-05 00:58:50 | INFO  | Wait 1 second(s) until the next check [repeated polling entries trimmed: tasks afd05dc6-00c8-484e-80f5-bf109d8fc2f7 and 21dc5818-5474-4790-bcc2-8f4cabeea70f remained in state STARTED, rechecked every ~3 seconds from 00:58:53 through 00:59:20] 2026-01-05 00:59:23.798833 | orchestrator | 2026-01-05 00:59:23 | INFO  | Task
afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state STARTED 2026-01-05 00:59:23.800529 | orchestrator | 2026-01-05 00:59:23 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED 2026-01-05 00:59:23.800723 | orchestrator | 2026-01-05 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:26.853237 | orchestrator | 2026-01-05 00:59:26 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 00:59:26.860439 | orchestrator | 2026-01-05 00:59:26 | INFO  | Task afd05dc6-00c8-484e-80f5-bf109d8fc2f7 is in state SUCCESS 2026-01-05 00:59:26.861994 | orchestrator | 2026-01-05 00:59:26.862099 | orchestrator | 2026-01-05 00:59:26.862109 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-05 00:59:26.862117 | orchestrator | 2026-01-05 00:59:26.862122 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-05 00:59:26.862128 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:00.096) 0:00:00.096 ******** 2026-01-05 00:59:26.862134 | orchestrator | ok: [localhost] => { 2026-01-05 00:59:26.862173 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-05 00:59:26.862179 | orchestrator | } 2026-01-05 00:59:26.862185 | orchestrator | 2026-01-05 00:59:26.862191 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-05 00:59:26.862196 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:00.064) 0:00:00.161 ******** 2026-01-05 00:59:26.862202 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-05 00:59:26.862209 | orchestrator | ...ignoring 2026-01-05 00:59:26.862214 | orchestrator | 2026-01-05 00:59:26.862220 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-05 00:59:26.862225 | orchestrator | Monday 05 January 2026 00:55:53 +0000 (0:00:02.870) 0:00:03.031 ******** 2026-01-05 00:59:26.862230 | orchestrator | skipping: [localhost] 2026-01-05 00:59:26.862235 | orchestrator | 2026-01-05 00:59:26.862241 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-05 00:59:26.862292 | orchestrator | Monday 05 January 2026 00:55:54 +0000 (0:00:00.074) 0:00:03.106 ******** 2026-01-05 00:59:26.862298 | orchestrator | ok: [localhost] 2026-01-05 00:59:26.862303 | orchestrator | 2026-01-05 00:59:26.862308 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:59:26.862313 | orchestrator | 2026-01-05 00:59:26.862318 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:59:26.862323 | orchestrator | Monday 05 January 2026 00:55:54 +0000 (0:00:00.178) 0:00:03.285 ******** 2026-01-05 00:59:26.862346 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.862511 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.862518 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.862523 | orchestrator | 2026-01-05 00:59:26.862528 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:59:26.862533 | orchestrator | Monday 05 January 2026 00:55:54 +0000 (0:00:00.351) 0:00:03.636 ******** 2026-01-05 00:59:26.862538 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-05 00:59:26.862543 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-01-05 00:59:26.862548 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-05 00:59:26.862554 | orchestrator | 2026-01-05 00:59:26.862559 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-05 00:59:26.862564 | orchestrator | 2026-01-05 00:59:26.862569 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-05 00:59:26.862574 | orchestrator | Monday 05 January 2026 00:55:55 +0000 (0:00:00.671) 0:00:04.308 ******** 2026-01-05 00:59:26.862580 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 00:59:26.862585 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-05 00:59:26.862590 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-05 00:59:26.862595 | orchestrator | 2026-01-05 00:59:26.862600 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 00:59:26.862605 | orchestrator | Monday 05 January 2026 00:55:55 +0000 (0:00:00.390) 0:00:04.698 ******** 2026-01-05 00:59:26.862611 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:26.862616 | orchestrator | 2026-01-05 00:59:26.862624 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-05 00:59:26.862633 | orchestrator | Monday 05 January 2026 00:55:56 +0000 (0:00:00.664) 0:00:05.362 ******** 2026-01-05 00:59:26.862773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:59:26.862794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:59:26.862818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:59:26.862828 | orchestrator | 2026-01-05 00:59:26.862857 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-05 00:59:26.862867 | orchestrator | Monday 05 January 2026 00:55:59 +0000 (0:00:02.823) 0:00:08.186 ******** 2026-01-05 00:59:26.862875 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.862884 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.862892 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.862900 | orchestrator | 2026-01-05 00:59:26.862908 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-05 00:59:26.862916 | orchestrator | Monday 05 January 2026 00:55:59 +0000 (0:00:00.788) 0:00:08.975 ******** 2026-01-05 00:59:26.862931 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
00:59:26.862939 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.862948 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.862957 | orchestrator | 2026-01-05 00:59:26.862965 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-05 00:59:26.862974 | orchestrator | Monday 05 January 2026 00:56:01 +0000 (0:00:01.664) 0:00:10.640 ******** 2026-01-05 00:59:26.862984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:59:26.863004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:59:26.863047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 
00:59:26.863053 | orchestrator | 2026-01-05 00:59:26.863058 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-05 00:59:26.863063 | orchestrator | Monday 05 January 2026 00:56:05 +0000 (0:00:03.946) 0:00:14.586 ******** 2026-01-05 00:59:26.863068 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.863073 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.863078 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.863083 | orchestrator | 2026-01-05 00:59:26.863088 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-05 00:59:26.863093 | orchestrator | Monday 05 January 2026 00:56:06 +0000 (0:00:01.162) 0:00:15.748 ******** 2026-01-05 00:59:26.863098 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.863103 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:26.863108 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:26.863113 | orchestrator | 2026-01-05 00:59:26.863119 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 00:59:26.863124 | orchestrator | Monday 05 January 2026 00:56:11 +0000 (0:00:04.891) 0:00:20.640 ******** 2026-01-05 00:59:26.863129 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:26.863134 | orchestrator | 2026-01-05 00:59:26.863140 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-05 00:59:26.863149 | orchestrator | Monday 05 January 2026 00:56:12 +0000 (0:00:00.581) 0:00:21.221 ******** 2026-01-05 00:59:26.863163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863174 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.863179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863185 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.863198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863209 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.863214 | orchestrator | 2026-01-05 00:59:26.863219 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 00:59:26.863224 | orchestrator | Monday 05 January 2026 00:56:15 +0000 (0:00:03.673) 0:00:24.895 ******** 2026-01-05 00:59:26.863230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863235 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.863247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863258 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.863264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863269 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.863274 | orchestrator | 2026-01-05 00:59:26.863279 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-05 00:59:26.863285 | orchestrator | Monday 05 January 2026 00:56:19 +0000 (0:00:03.520) 0:00:28.415 ******** 2026-01-05 00:59:26.863293 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863306 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.863317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863323 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.863331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863344 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.863349 | orchestrator | 2026-01-05 00:59:26.863354 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-01-05 00:59:26.863359 | orchestrator | Monday 05 January 2026 00:56:22 +0000 
(0:00:02.888) 0:00:31.304 ******** 2026-01-05 00:59:26.863372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:59:26.863383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:59:26.863399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:59:26.863406 | orchestrator | 2026-01-05 00:59:26.863412 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-01-05 00:59:26.863419 | orchestrator | Monday 05 January 2026 00:56:26 +0000 (0:00:04.233) 0:00:35.538 ******** 2026-01-05 00:59:26.863425 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 00:59:26.863431 | 
orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:59:26.863437 | orchestrator | }
2026-01-05 00:59:26.863443 | orchestrator | changed: [testbed-node-1] => {
2026-01-05 00:59:26.863449 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:59:26.863455 | orchestrator | }
2026-01-05 00:59:26.863461 | orchestrator | changed: [testbed-node-2] => {
2026-01-05 00:59:26.863468 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 00:59:26.863474 | orchestrator | }
2026-01-05 00:59:26.863480 | orchestrator |
2026-01-05 00:59:26.863485 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-05 00:59:26.863492 | orchestrator | Monday 05 January 2026 00:56:27 +0000 (0:00:00.678) 0:00:36.216 ********
2026-01-05 00:59:26.863502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863512 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.863541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.863549 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.863558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 00:59:26.863570 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.863575 | orchestrator |
2026-01-05 00:59:26.863582 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-01-05 00:59:26.863588 | orchestrator | Monday 05 January 2026 00:56:30 +0000 (0:00:03.364) 0:00:39.580 ********
2026-01-05 00:59:26.863593 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863600 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.863605 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.863612 | orchestrator |
2026-01-05 00:59:26.863618 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-01-05 00:59:26.863624 | orchestrator | Monday 05 January 2026 00:56:30 +0000 (0:00:00.311) 0:00:39.892 ********
2026-01-05 00:59:26.863631 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863636 | orchestrator |
2026-01-05 00:59:26.863642 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-01-05 00:59:26.863648 | orchestrator | Monday 05 January 2026 00:56:30 +0000 (0:00:00.127) 0:00:40.020 ********
2026-01-05 00:59:26.863654 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863661 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.863666 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.863672 | orchestrator |
2026-01-05 00:59:26.863678 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-01-05 00:59:26.863684 | orchestrator | Monday 05 January 2026 00:56:31 +0000 (0:00:00.614) 0:00:40.635 ********
2026-01-05 00:59:26.863694 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863700 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.863706 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.863713 | orchestrator |
2026-01-05 00:59:26.863719 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-01-05 00:59:26.863725 | orchestrator | Monday 05 January 2026 00:56:31 +0000 (0:00:00.345) 0:00:40.980 ********
2026-01-05 00:59:26.863731 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863737 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.863743 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.863749 | orchestrator |
2026-01-05 00:59:26.863755 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-01-05 00:59:26.863761 | orchestrator | Monday 05 January 2026 00:56:32 +0000 (0:00:00.351) 0:00:41.332 ********
2026-01-05 00:59:26.863767 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863773 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.863778 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.863784 | orchestrator |
2026-01-05 00:59:26.863789 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-01-05 00:59:26.863794 | orchestrator | Monday 05 January 2026 00:56:32 +0000 (0:00:00.308) 0:00:41.640 ********
2026-01-05 00:59:26.863799 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863804 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.863810 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.863815 | orchestrator |
2026-01-05 00:59:26.863824 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-01-05 00:59:26.863829 | orchestrator | Monday 05 January 2026 00:56:33 +0000 (0:00:00.586) 0:00:42.226 ********
2026-01-05 00:59:26.863835 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863840 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.863845 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.863850 | orchestrator |
2026-01-05 00:59:26.863855 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-01-05 00:59:26.863860 | orchestrator | Monday 05 January 2026 00:56:33 +0000 (0:00:00.457) 0:00:42.684 ********
2026-01-05 00:59:26.863940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:59:26.863947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:59:26.863952 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:59:26.863957 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.863962 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:59:26.863967 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:59:26.863972 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:59:26.863977 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.863983 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:59:26.863988 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:59:26.863995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:59:26.864004 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864058 | orchestrator |
2026-01-05 00:59:26.864066 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-01-05 00:59:26.864074 | orchestrator | Monday 05 January 2026 00:56:34 +0000 (0:00:00.456) 0:00:43.141 ********
2026-01-05 00:59:26.864082 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.864090 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.864098 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864107 | orchestrator |
2026-01-05 00:59:26.864115 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-01-05 00:59:26.864123 | orchestrator | Monday 05 January 2026 00:56:34 +0000 (0:00:00.417) 0:00:43.558 ********
2026-01-05 00:59:26.864131 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.864140 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.864149 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864157 | orchestrator |
2026-01-05 00:59:26.864166 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-01-05 00:59:26.864176 | orchestrator | Monday 05 January 2026 00:56:35 +0000 (0:00:00.535) 0:00:44.093 ********
2026-01-05 00:59:26.864185 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.864193 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.864203 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864209 | orchestrator |
2026-01-05 00:59:26.864223 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-01-05 00:59:26.864231 | orchestrator | Monday 05 January 2026 00:56:35 +0000 (0:00:00.335) 0:00:44.429 ********
2026-01-05 00:59:26.864244 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.864254 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.864261 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864270 | orchestrator |
2026-01-05 00:59:26.864278 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-01-05 00:59:26.864286 | orchestrator | Monday 05 January 2026 00:56:35 +0000 (0:00:00.383) 0:00:44.812 ********
2026-01-05 00:59:26.864293 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.864302 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.864310 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864318 | orchestrator |
2026-01-05 00:59:26.864327 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-01-05 00:59:26.864344 | orchestrator | Monday 05 January 2026 00:56:36 +0000 (0:00:00.354) 0:00:45.167 ********
2026-01-05 00:59:26.864352 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.864360 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.864369 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864377 | orchestrator |
2026-01-05 00:59:26.864385 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-01-05 00:59:26.864392 | orchestrator | Monday 05 January 2026 00:56:36 +0000 (0:00:00.350) 0:00:45.518 ********
2026-01-05 00:59:26.864400 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.864408 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.864418 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864425 | orchestrator |
2026-01-05 00:59:26.864433 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-01-05 00:59:26.864451 | orchestrator | Monday 05 January 2026 00:56:37 +0000 (0:00:00.566) 0:00:46.084 ********
2026-01-05 00:59:26.864459 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:26.864467 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:26.864475 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:26.864483 | orchestrator |
2026-01-05 00:59:26.864491 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-01-05 00:59:26.864499 | orchestrator | Monday 05
January 2026 00:56:37 +0000 (0:00:00.362) 0:00:46.447 ******** 2026-01-05 00:59:26.864508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.864520 | orchestrator | skipping: [testbed-node-0] 2026-01-05 
00:59:26.864535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.864552 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.864568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.864578 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.864586 | orchestrator | 2026-01-05 00:59:26.864594 | orchestrator | TASK [mariadb : Wait for slave MariaDB] 
**************************************** 2026-01-05 00:59:26.864601 | orchestrator | Monday 05 January 2026 00:56:39 +0000 (0:00:02.431) 0:00:48.878 ******** 2026-01-05 00:59:26.864609 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.864618 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.864626 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.864633 | orchestrator | 2026-01-05 00:59:26.864641 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-01-05 00:59:26.864650 | orchestrator | Monday 05 January 2026 00:56:40 +0000 (0:00:00.334) 0:00:49.213 ******** 2026-01-05 00:59:26.864664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.864692 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.864702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.864711 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.864724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:26.864741 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.864751 | orchestrator | 2026-01-05 00:59:26.864759 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-01-05 00:59:26.864768 | orchestrator | Monday 05 January 2026 00:56:42 +0000 (0:00:02.428) 0:00:51.641 ******** 2026-01-05 00:59:26.864778 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.864786 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.864795 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.864803 | orchestrator | 2026-01-05 00:59:26.864811 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-05 00:59:26.864824 | orchestrator | Monday 05 January 2026 00:56:42 +0000 (0:00:00.403) 0:00:52.045 ******** 2026-01-05 00:59:26.864833 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.864843 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.864852 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.864862 | orchestrator | 2026-01-05 00:59:26.864871 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-05 00:59:26.864880 | orchestrator | Monday 05 January 2026 00:56:43 +0000 (0:00:00.359) 0:00:52.405 ******** 2026-01-05 00:59:26.864888 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.864897 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.864905 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.864914 | orchestrator | 2026-01-05 
00:59:26.864924 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-05 00:59:26.864933 | orchestrator | Monday 05 January 2026 00:56:43 +0000 (0:00:00.333) 0:00:52.738 ******** 2026-01-05 00:59:26.864942 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.864951 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.864960 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.864969 | orchestrator | 2026-01-05 00:59:26.864979 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-05 00:59:26.864988 | orchestrator | Monday 05 January 2026 00:56:44 +0000 (0:00:00.806) 0:00:53.545 ******** 2026-01-05 00:59:26.864996 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.865005 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.865070 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.865080 | orchestrator | 2026-01-05 00:59:26.865089 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-05 00:59:26.865098 | orchestrator | Monday 05 January 2026 00:56:44 +0000 (0:00:00.312) 0:00:53.858 ******** 2026-01-05 00:59:26.865107 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.865116 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:26.865124 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:26.865134 | orchestrator | 2026-01-05 00:59:26.865154 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-05 00:59:26.865162 | orchestrator | Monday 05 January 2026 00:56:45 +0000 (0:00:00.817) 0:00:54.675 ******** 2026-01-05 00:59:26.865171 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.865180 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.865189 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.865198 | orchestrator | 2026-01-05 00:59:26.865207 | 
orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-05 00:59:26.865215 | orchestrator | Monday 05 January 2026 00:56:46 +0000 (0:00:00.581) 0:00:55.257 ******** 2026-01-05 00:59:26.865223 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.865231 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.865238 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.865247 | orchestrator | 2026-01-05 00:59:26.865255 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-05 00:59:26.865262 | orchestrator | Monday 05 January 2026 00:56:46 +0000 (0:00:00.369) 0:00:55.626 ******** 2026-01-05 00:59:26.865271 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-05 00:59:26.865281 | orchestrator | ...ignoring 2026-01-05 00:59:26.865290 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-05 00:59:26.865298 | orchestrator | ...ignoring 2026-01-05 00:59:26.865306 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-05 00:59:26.865315 | orchestrator | ...ignoring 2026-01-05 00:59:26.865324 | orchestrator | 2026-01-05 00:59:26.865332 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-05 00:59:26.865340 | orchestrator | Monday 05 January 2026 00:56:57 +0000 (0:00:10.766) 0:01:06.393 ******** 2026-01-05 00:59:26.865349 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.865357 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.865365 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.865374 | orchestrator | 2026-01-05 00:59:26.865382 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-05 00:59:26.865390 | orchestrator | Monday 05 January 2026 00:56:57 +0000 (0:00:00.330) 0:01:06.724 ******** 2026-01-05 00:59:26.865406 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.865415 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.865424 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.865433 | orchestrator | 2026-01-05 00:59:26.865442 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-05 00:59:26.865450 | orchestrator | Monday 05 January 2026 00:56:58 +0000 (0:00:00.545) 0:01:07.270 ******** 2026-01-05 00:59:26.865458 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.865466 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.865473 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.865481 | orchestrator | 2026-01-05 00:59:26.865489 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-05 00:59:26.865498 | orchestrator | Monday 05 January 2026 00:56:58 +0000 (0:00:00.345) 0:01:07.615 ******** 2026-01-05 00:59:26.865507 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 00:59:26.865515 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.865524 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.865532 | orchestrator | 2026-01-05 00:59:26.865540 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-05 00:59:26.865549 | orchestrator | Monday 05 January 2026 00:56:58 +0000 (0:00:00.349) 0:01:07.964 ******** 2026-01-05 00:59:26.865558 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.865566 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.865574 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.865583 | orchestrator | 2026-01-05 00:59:26.865592 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-05 00:59:26.865607 | orchestrator | Monday 05 January 2026 00:56:59 +0000 (0:00:00.333) 0:01:08.298 ******** 2026-01-05 00:59:26.865616 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.865634 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.865643 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.865651 | orchestrator | 2026-01-05 00:59:26.865660 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 00:59:26.865669 | orchestrator | Monday 05 January 2026 00:56:59 +0000 (0:00:00.618) 0:01:08.917 ******** 2026-01-05 00:59:26.865678 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.865686 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.865695 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-05 00:59:26.865703 | orchestrator | 2026-01-05 00:59:26.865711 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-05 00:59:26.865718 | orchestrator | Monday 05 January 2026 00:57:00 +0000 (0:00:00.425) 0:01:09.342 ******** 2026-01-05 
00:59:26.865726 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.865734 | orchestrator | 2026-01-05 00:59:26.865742 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-05 00:59:26.865751 | orchestrator | Monday 05 January 2026 00:57:10 +0000 (0:00:10.403) 0:01:19.746 ******** 2026-01-05 00:59:26.865760 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.865769 | orchestrator | 2026-01-05 00:59:26.865778 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 00:59:26.865787 | orchestrator | Monday 05 January 2026 00:57:10 +0000 (0:00:00.154) 0:01:19.900 ******** 2026-01-05 00:59:26.865795 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.865803 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.865812 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.865820 | orchestrator | 2026-01-05 00:59:26.865829 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-05 00:59:26.865837 | orchestrator | Monday 05 January 2026 00:57:11 +0000 (0:00:00.919) 0:01:20.820 ******** 2026-01-05 00:59:26.865845 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.865854 | orchestrator | 2026-01-05 00:59:26.865862 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-05 00:59:26.865871 | orchestrator | Monday 05 January 2026 00:57:20 +0000 (0:00:08.462) 0:01:29.282 ******** 2026-01-05 00:59:26.865880 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
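The `FAILED - RETRYING ... (10 retries left)` line above is Ansible's retry loop around a TCP port probe (the `Wait for ... port liveness` handlers use `wait_for`-style checks against 3306). A minimal sketch of the same idea in Python — the function name, retry count, and delay are illustrative assumptions, not values taken from this job:

```python
import socket
import time

def wait_for_port(host: str, port: int, retries: int = 10, delay: float = 1.0) -> bool:
    """Poll a TCP port until it accepts connections, roughly mimicking
    the retrying port-liveness handler seen in the log above."""
    for _ in range(retries):
        try:
            # A successful connect means something is listening on the port.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            # Connection refused / timed out: the service is not up yet.
            time.sleep(delay)
    return False
```

Note that liveness on 3306 is only the first gate: the subsequent `Wait for ... service to sync WSREP` handler additionally waits for the Galera node to report a synced state before the cluster member is considered usable.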
2026-01-05 00:59:26.865890 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.865898 | orchestrator | 2026-01-05 00:59:26.865907 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-05 00:59:26.865915 | orchestrator | Monday 05 January 2026 00:57:27 +0000 (0:00:07.259) 0:01:36.542 ******** 2026-01-05 00:59:26.865924 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.865933 | orchestrator | 2026-01-05 00:59:26.865942 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-05 00:59:26.865951 | orchestrator | Monday 05 January 2026 00:57:29 +0000 (0:00:02.392) 0:01:38.935 ******** 2026-01-05 00:59:26.865960 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.865969 | orchestrator | 2026-01-05 00:59:26.865978 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-05 00:59:26.865986 | orchestrator | Monday 05 January 2026 00:57:29 +0000 (0:00:00.128) 0:01:39.063 ******** 2026-01-05 00:59:26.865994 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.866003 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.866069 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.866080 | orchestrator | 2026-01-05 00:59:26.866089 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-05 00:59:26.866097 | orchestrator | Monday 05 January 2026 00:57:30 +0000 (0:00:00.426) 0:01:39.489 ******** 2026-01-05 00:59:26.866107 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.866125 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-05 00:59:26.866134 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:26.866144 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:26.866151 | orchestrator | 2026-01-05 00:59:26.866159 | orchestrator | PLAY [Restart 
mariadb services] ************************************************ 2026-01-05 00:59:26.866342 | orchestrator | skipping: no hosts matched 2026-01-05 00:59:26.866360 | orchestrator | 2026-01-05 00:59:26.866369 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-05 00:59:26.866377 | orchestrator | 2026-01-05 00:59:26.866386 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 00:59:26.866402 | orchestrator | Monday 05 January 2026 00:57:30 +0000 (0:00:00.578) 0:01:40.068 ******** 2026-01-05 00:59:26.866410 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:26.866418 | orchestrator | 2026-01-05 00:59:26.866426 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 00:59:26.866434 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:17.606) 0:01:57.674 ******** 2026-01-05 00:59:26.866442 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.866450 | orchestrator | 2026-01-05 00:59:26.866459 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 00:59:26.866467 | orchestrator | Monday 05 January 2026 00:58:04 +0000 (0:00:15.659) 0:02:13.333 ******** 2026-01-05 00:59:26.866474 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.866481 | orchestrator | 2026-01-05 00:59:26.866489 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-05 00:59:26.866497 | orchestrator | 2026-01-05 00:59:26.866505 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 00:59:26.866513 | orchestrator | Monday 05 January 2026 00:58:06 +0000 (0:00:02.563) 0:02:15.897 ******** 2026-01-05 00:59:26.866521 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:26.866528 | orchestrator | 2026-01-05 00:59:26.866536 | orchestrator | TASK [mariadb : Wait 
for MariaDB service port liveness] ************************ 2026-01-05 00:59:26.866544 | orchestrator | Monday 05 January 2026 00:58:30 +0000 (0:00:24.108) 0:02:40.006 ******** 2026-01-05 00:59:26.866551 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.866559 | orchestrator | 2026-01-05 00:59:26.866566 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 00:59:26.866572 | orchestrator | Monday 05 January 2026 00:58:41 +0000 (0:00:10.629) 0:02:50.635 ******** 2026-01-05 00:59:26.866577 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.866581 | orchestrator | 2026-01-05 00:59:26.866598 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-05 00:59:26.866604 | orchestrator | 2026-01-05 00:59:26.866609 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 00:59:26.866614 | orchestrator | Monday 05 January 2026 00:58:43 +0000 (0:00:02.359) 0:02:52.994 ******** 2026-01-05 00:59:26.866619 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.866624 | orchestrator | 2026-01-05 00:59:26.866629 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 00:59:26.866634 | orchestrator | Monday 05 January 2026 00:58:56 +0000 (0:00:12.365) 0:03:05.359 ******** 2026-01-05 00:59:26.866638 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.866643 | orchestrator | 2026-01-05 00:59:26.866648 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 00:59:26.866653 | orchestrator | Monday 05 January 2026 00:59:00 +0000 (0:00:04.615) 0:03:09.975 ******** 2026-01-05 00:59:26.866658 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.866663 | orchestrator | 2026-01-05 00:59:26.866667 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-05 
00:59:26.866672 | orchestrator | 2026-01-05 00:59:26.866677 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-05 00:59:26.866682 | orchestrator | Monday 05 January 2026 00:59:03 +0000 (0:00:02.568) 0:03:12.544 ******** 2026-01-05 00:59:26.866697 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:26.866702 | orchestrator | 2026-01-05 00:59:26.866707 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-05 00:59:26.866712 | orchestrator | Monday 05 January 2026 00:59:04 +0000 (0:00:00.571) 0:03:13.115 ******** 2026-01-05 00:59:26.866717 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.866721 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.866726 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.866731 | orchestrator | 2026-01-05 00:59:26.866736 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-05 00:59:26.866744 | orchestrator | Monday 05 January 2026 00:59:06 +0000 (0:00:02.315) 0:03:15.431 ******** 2026-01-05 00:59:26.866752 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.866759 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.866767 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.866775 | orchestrator | 2026-01-05 00:59:26.866782 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-05 00:59:26.866789 | orchestrator | Monday 05 January 2026 00:59:08 +0000 (0:00:02.552) 0:03:17.984 ******** 2026-01-05 00:59:26.866796 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.866804 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.866812 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.866819 | orchestrator | 2026-01-05 00:59:26.866827 | orchestrator | TASK [mariadb : Granting 
permissions on Mariabackup database to backup user] *** 2026-01-05 00:59:26.866835 | orchestrator | Monday 05 January 2026 00:59:11 +0000 (0:00:02.468) 0:03:20.453 ******** 2026-01-05 00:59:26.866843 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.866850 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.866858 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:26.866866 | orchestrator | 2026-01-05 00:59:26.866874 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-05 00:59:26.866881 | orchestrator | Monday 05 January 2026 00:59:14 +0000 (0:00:02.727) 0:03:23.180 ******** 2026-01-05 00:59:26.866888 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.866895 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.866902 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.866909 | orchestrator | 2026-01-05 00:59:26.866916 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-05 00:59:26.866924 | orchestrator | Monday 05 January 2026 00:59:18 +0000 (0:00:04.589) 0:03:27.770 ******** 2026-01-05 00:59:26.866931 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.866939 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.866948 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.866955 | orchestrator | 2026-01-05 00:59:26.866964 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-05 00:59:26.866973 | orchestrator | Monday 05 January 2026 00:59:20 +0000 (0:00:02.266) 0:03:30.036 ******** 2026-01-05 00:59:26.866981 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.866989 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.866997 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.867054 | orchestrator | 2026-01-05 00:59:26.867075 | orchestrator | TASK [mariadb : Wait for MariaDB service to 
be ready through VIP] ************** 2026-01-05 00:59:26.867084 | orchestrator | Monday 05 January 2026 00:59:21 +0000 (0:00:00.791) 0:03:30.827 ******** 2026-01-05 00:59:26.867093 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:26.867101 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:26.867107 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:26.867112 | orchestrator | 2026-01-05 00:59:26.867118 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-05 00:59:26.867124 | orchestrator | Monday 05 January 2026 00:59:24 +0000 (0:00:02.298) 0:03:33.125 ******** 2026-01-05 00:59:26.867130 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:26.867136 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:26.867149 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:26.867155 | orchestrator | 2026-01-05 00:59:26.867160 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:59:26.867167 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-05 00:59:26.867173 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-01-05 00:59:26.867180 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-05 00:59:26.867193 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-05 00:59:26.867199 | orchestrator | 2026-01-05 00:59:26.867204 | orchestrator | 2026-01-05 00:59:26.867208 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:59:26.867213 | orchestrator | Monday 05 January 2026 00:59:24 +0000 (0:00:00.485) 0:03:33.611 ******** 2026-01-05 00:59:26.867218 | orchestrator | 
=============================================================================== 2026-01-05 00:59:26.867223 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.71s 2026-01-05 00:59:26.867228 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.29s 2026-01-05 00:59:26.867233 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.37s 2026-01-05 00:59:26.867238 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.77s 2026-01-05 00:59:26.867243 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.40s 2026-01-05 00:59:26.867248 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.46s 2026-01-05 00:59:26.867252 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.26s 2026-01-05 00:59:26.867257 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.92s 2026-01-05 00:59:26.867262 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.89s 2026-01-05 00:59:26.867267 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.62s 2026-01-05 00:59:26.867272 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.59s 2026-01-05 00:59:26.867276 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.23s 2026-01-05 00:59:26.867281 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.95s 2026-01-05 00:59:26.867286 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.67s 2026-01-05 00:59:26.867291 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.52s 2026-01-05 00:59:26.867296 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 3.36s 2026-01-05 00:59:26.867301 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.89s 2026-01-05 00:59:26.867306 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s 2026-01-05 00:59:26.867311 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.82s 2026-01-05 00:59:26.867316 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.73s 2026-01-05 00:59:26.867321 | orchestrator | 2026-01-05 00:59:26 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED 2026-01-05 00:59:26.867326 | orchestrator | 2026-01-05 00:59:26 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED 2026-01-05 00:59:26.867331 | orchestrator | 2026-01-05 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:29.912789 | orchestrator | 2026-01-05 00:59:29 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 00:59:29.915477 | orchestrator | 2026-01-05 00:59:29 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED 2026-01-05 00:59:29.915840 | orchestrator | 2026-01-05 00:59:29 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED 2026-01-05 00:59:29.916115 | orchestrator | 2026-01-05 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:32.965071 | orchestrator | 2026-01-05 00:59:32 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 00:59:32.965182 | orchestrator | 2026-01-05 00:59:32 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED 2026-01-05 00:59:32.966384 | orchestrator | 2026-01-05 00:59:32 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED 2026-01-05 00:59:32.966413 | orchestrator | 2026-01-05 00:59:32 | INFO  | Wait 1 second(s) until the next check 
2026-01-05 00:59:36.011334 | orchestrator | 2026-01-05 00:59:36 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 00:59:36.013572 | orchestrator | 2026-01-05 00:59:36 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state STARTED 2026-01-05 00:59:36.015368 | orchestrator | 2026-01-05 00:59:36 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED 2026-01-05 00:59:36.015858 | orchestrator | 2026-01-05 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:39.062959 | orchestrator | 2026-01-05 00:59:39 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 00:59:39.069083 | orchestrator | 2026-01-05 00:59:39 | INFO  | Task 21dc5818-5474-4790-bcc2-8f4cabeea70f is in state SUCCESS 2026-01-05 00:59:39.069974 | orchestrator | 2026-01-05 00:59:39.070220 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-05 00:59:39.070239 | orchestrator | 2.16.14 2026-01-05 00:59:39.070247 | orchestrator | 2026-01-05 00:59:39.070254 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-05 00:59:39.070262 | orchestrator | 2026-01-05 00:59:39.070268 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-05 00:59:39.070275 | orchestrator | Monday 05 January 2026 00:57:25 +0000 (0:00:00.618) 0:00:00.618 ******** 2026-01-05 00:59:39.070281 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:59:39.070288 | orchestrator | 2026-01-05 00:59:39.070294 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-05 00:59:39.070300 | orchestrator | Monday 05 January 2026 00:57:26 +0000 (0:00:00.704) 0:00:01.323 ******** 2026-01-05 00:59:39.070305 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.070311 | orchestrator 
| ok: [testbed-node-3] 2026-01-05 00:59:39.070317 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.070322 | orchestrator | 2026-01-05 00:59:39.070328 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-05 00:59:39.070333 | orchestrator | Monday 05 January 2026 00:57:26 +0000 (0:00:00.644) 0:00:01.968 ******** 2026-01-05 00:59:39.070339 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.070346 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.070351 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.070357 | orchestrator | 2026-01-05 00:59:39.070363 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-05 00:59:39.070370 | orchestrator | Monday 05 January 2026 00:57:27 +0000 (0:00:00.352) 0:00:02.320 ******** 2026-01-05 00:59:39.070375 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.070381 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.070386 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.070393 | orchestrator | 2026-01-05 00:59:39.070399 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-05 00:59:39.070428 | orchestrator | Monday 05 January 2026 00:57:28 +0000 (0:00:00.856) 0:00:03.177 ******** 2026-01-05 00:59:39.070434 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.070440 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.070790 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.070806 | orchestrator | 2026-01-05 00:59:39.070813 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-05 00:59:39.070820 | orchestrator | Monday 05 January 2026 00:57:28 +0000 (0:00:00.309) 0:00:03.487 ******** 2026-01-05 00:59:39.070826 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.070832 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.070836 | orchestrator | ok: 
[testbed-node-5] 2026-01-05 00:59:39.070840 | orchestrator | 2026-01-05 00:59:39.070844 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-05 00:59:39.070848 | orchestrator | Monday 05 January 2026 00:57:28 +0000 (0:00:00.321) 0:00:03.808 ******** 2026-01-05 00:59:39.070852 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.070856 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.070861 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.070864 | orchestrator | 2026-01-05 00:59:39.070868 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-05 00:59:39.070872 | orchestrator | Monday 05 January 2026 00:57:29 +0000 (0:00:00.316) 0:00:04.125 ******** 2026-01-05 00:59:39.070877 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.070881 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.070885 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.070889 | orchestrator | 2026-01-05 00:59:39.070893 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-05 00:59:39.070936 | orchestrator | Monday 05 January 2026 00:57:29 +0000 (0:00:00.512) 0:00:04.637 ******** 2026-01-05 00:59:39.070942 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.070946 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.070950 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.070953 | orchestrator | 2026-01-05 00:59:39.070957 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-05 00:59:39.070963 | orchestrator | Monday 05 January 2026 00:57:29 +0000 (0:00:00.306) 0:00:04.944 ******** 2026-01-05 00:59:39.070971 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 00:59:39.070977 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-01-05 00:59:39.071014 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 00:59:39.071022 | orchestrator | 2026-01-05 00:59:39.071029 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-05 00:59:39.071154 | orchestrator | Monday 05 January 2026 00:57:30 +0000 (0:00:00.674) 0:00:05.619 ******** 2026-01-05 00:59:39.071176 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.071180 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.071184 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.071188 | orchestrator | 2026-01-05 00:59:39.071192 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-05 00:59:39.071198 | orchestrator | Monday 05 January 2026 00:57:31 +0000 (0:00:00.460) 0:00:06.079 ******** 2026-01-05 00:59:39.071205 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 00:59:39.071212 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 00:59:39.071221 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 00:59:39.071226 | orchestrator | 2026-01-05 00:59:39.071232 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-05 00:59:39.071238 | orchestrator | Monday 05 January 2026 00:57:33 +0000 (0:00:02.295) 0:00:08.375 ******** 2026-01-05 00:59:39.071246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 00:59:39.071252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 00:59:39.071272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-05 00:59:39.071278 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071284 | orchestrator | 2026-01-05 00:59:39.071329 
| orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-05 00:59:39.071338 | orchestrator | Monday 05 January 2026 00:57:34 +0000 (0:00:00.797) 0:00:09.172 ******** 2026-01-05 00:59:39.071347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.071358 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.071362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.071366 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071370 | orchestrator | 2026-01-05 00:59:39.071374 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-05 00:59:39.071378 | orchestrator | Monday 05 January 2026 00:57:35 +0000 (0:00:00.908) 0:00:10.080 ******** 2026-01-05 00:59:39.071427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.071435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.071440 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.071444 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071447 | orchestrator | 2026-01-05 00:59:39.071451 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-05 00:59:39.071455 | orchestrator | Monday 05 January 2026 00:57:35 +0000 (0:00:00.426) 0:00:10.507 ******** 2026-01-05 00:59:39.071466 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '41fd055eea1a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-05 00:57:31.801122', 'end': '2026-01-05 00:57:31.841759', 'delta': '0:00:00.040637', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['41fd055eea1a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-05 00:59:39.071472 | orchestrator | ok: [testbed-node-3] => 
(item={'changed': False, 'stdout': 'b08ddf0e3c32', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-05 00:57:32.639457', 'end': '2026-01-05 00:57:32.677161', 'delta': '0:00:00.037704', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b08ddf0e3c32'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-05 00:59:39.071504 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ebdcb68d00df', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-05 00:57:33.203069', 'end': '2026-01-05 00:57:33.241290', 'delta': '0:00:00.038221', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ebdcb68d00df'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-05 00:59:39.071515 | orchestrator | 2026-01-05 00:59:39.071523 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-05 00:59:39.071529 | orchestrator | Monday 05 January 2026 00:57:35 +0000 (0:00:00.186) 0:00:10.693 ******** 2026-01-05 00:59:39.071535 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.071542 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.071549 | orchestrator | ok: [testbed-node-5] 
2026-01-05 00:59:39.071555 | orchestrator | 2026-01-05 00:59:39.071561 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-05 00:59:39.071568 | orchestrator | Monday 05 January 2026 00:57:36 +0000 (0:00:00.531) 0:00:11.225 ******** 2026-01-05 00:59:39.071573 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-05 00:59:39.071580 | orchestrator | 2026-01-05 00:59:39.071586 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-05 00:59:39.071593 | orchestrator | Monday 05 January 2026 00:57:38 +0000 (0:00:01.901) 0:00:13.126 ******** 2026-01-05 00:59:39.071599 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071606 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.071612 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071617 | orchestrator | 2026-01-05 00:59:39.071621 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-05 00:59:39.071625 | orchestrator | Monday 05 January 2026 00:57:38 +0000 (0:00:00.316) 0:00:13.443 ******** 2026-01-05 00:59:39.071629 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071633 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.071637 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071640 | orchestrator | 2026-01-05 00:59:39.071644 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-05 00:59:39.071648 | orchestrator | Monday 05 January 2026 00:57:38 +0000 (0:00:00.396) 0:00:13.839 ******** 2026-01-05 00:59:39.071652 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071656 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.071659 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071663 | orchestrator | 2026-01-05 00:59:39.071667 | orchestrator | TASK [ceph-facts : Set_fact 
fsid from current_fsid] **************************** 2026-01-05 00:59:39.071671 | orchestrator | Monday 05 January 2026 00:57:39 +0000 (0:00:00.545) 0:00:14.385 ******** 2026-01-05 00:59:39.071674 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.071678 | orchestrator | 2026-01-05 00:59:39.071682 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-05 00:59:39.071693 | orchestrator | Monday 05 January 2026 00:57:39 +0000 (0:00:00.138) 0:00:14.523 ******** 2026-01-05 00:59:39.071697 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071701 | orchestrator | 2026-01-05 00:59:39.071705 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-05 00:59:39.071708 | orchestrator | Monday 05 January 2026 00:57:39 +0000 (0:00:00.233) 0:00:14.756 ******** 2026-01-05 00:59:39.071712 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071716 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.071720 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071724 | orchestrator | 2026-01-05 00:59:39.071727 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-05 00:59:39.071731 | orchestrator | Monday 05 January 2026 00:57:40 +0000 (0:00:00.322) 0:00:15.079 ******** 2026-01-05 00:59:39.071735 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071739 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.071742 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071746 | orchestrator | 2026-01-05 00:59:39.071750 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-05 00:59:39.071754 | orchestrator | Monday 05 January 2026 00:57:40 +0000 (0:00:00.343) 0:00:15.422 ******** 2026-01-05 00:59:39.071762 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071766 | orchestrator | 
skipping: [testbed-node-4] 2026-01-05 00:59:39.071770 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071774 | orchestrator | 2026-01-05 00:59:39.071778 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-05 00:59:39.071782 | orchestrator | Monday 05 January 2026 00:57:40 +0000 (0:00:00.417) 0:00:15.839 ******** 2026-01-05 00:59:39.071786 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071792 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.071798 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071804 | orchestrator | 2026-01-05 00:59:39.071814 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-05 00:59:39.071822 | orchestrator | Monday 05 January 2026 00:57:41 +0000 (0:00:00.289) 0:00:16.129 ******** 2026-01-05 00:59:39.071828 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071835 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.071841 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071848 | orchestrator | 2026-01-05 00:59:39.071854 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-05 00:59:39.071859 | orchestrator | Monday 05 January 2026 00:57:41 +0000 (0:00:00.290) 0:00:16.419 ******** 2026-01-05 00:59:39.071865 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071871 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.071877 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071909 | orchestrator | 2026-01-05 00:59:39.071917 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-05 00:59:39.071923 | orchestrator | Monday 05 January 2026 00:57:41 +0000 (0:00:00.294) 0:00:16.714 ******** 2026-01-05 00:59:39.071929 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.071935 | orchestrator | 
skipping: [testbed-node-4] 2026-01-05 00:59:39.071941 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.071948 | orchestrator | 2026-01-05 00:59:39.071956 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-05 00:59:39.071962 | orchestrator | Monday 05 January 2026 00:57:42 +0000 (0:00:00.435) 0:00:17.149 ******** 2026-01-05 00:59:39.071971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c0354e6--1633--54b4--ae3c--130b25b2cb6c-osd--block--3c0354e6--1633--54b4--ae3c--130b25b2cb6c', 'dm-uuid-LVM-1bNQGFvidc8nrkpPsYOdfDFIHYrFQGDVNYadW2DrsZLcIpedVCQWvwA5S76TEs8y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:59:39.072045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0807b7d--156a--51e9--a1ef--1ae613918df1-osd--block--a0807b7d--156a--51e9--a1ef--1ae613918df1', 'dm-uuid-LVM-tSTwV2iW4LjCTxZWgGeenUx76m7JqjXYZ2zcK3DpzSyV2V5mf7hYpMGRwS0AMSC3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:59:39.072058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:59:39.072067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:59:39.072074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:59:39.072087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:59:39.072092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:59:39.072119 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3c0354e6--1633--54b4--ae3c--130b25b2cb6c-osd--block--3c0354e6--1633--54b4--ae3c--130b25b2cb6c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wCF7O2-hVZ1-bGfi-mfQ0-cosE-pNsg-J8TXhC', 'scsi-0QEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20', 'scsi-SQEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4-osd--block--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4', 'dm-uuid-LVM-50FqfdBqQUcdg50l79EdQUKZYdvcLXdCBREro9BdYBXi8HBxSPBWBHTTiHZOlj0n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a0807b7d--156a--51e9--a1ef--1ae613918df1-osd--block--a0807b7d--156a--51e9--a1ef--1ae613918df1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-R41xO0-GZQJ-UybA-uIr2-kVN3-mGCm-2mGCPc', 'scsi-0QEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2', 'scsi-SQEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7959794c--cc9c--59d9--9b66--2faefa464ed4-osd--block--7959794c--cc9c--59d9--9b66--2faefa464ed4', 'dm-uuid-LVM-UK400EAe7rHq5oqlR20ULRKqz452MV7xjqxw9yWYFtxLz3ceifB7e1DtGshHklZH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8', 'scsi-SQEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072266 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.072270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part1', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part14', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part15', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part16', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4-osd--block--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mjpc8L-T1wk-iEqY-Gz09-7Sdg-NV65-N5qSru', 'scsi-0QEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1', 'scsi-SQEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7959794c--cc9c--59d9--9b66--2faefa464ed4-osd--block--7959794c--cc9c--59d9--9b66--2faefa464ed4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HAdYPE-BZe5-AHeR-h3Wm-6WWQ-jTlU-5natTC', 'scsi-0QEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613', 'scsi-SQEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763', 'scsi-SQEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072318 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.072328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1631feb6--d96c--5a43--89dd--a558edd73d68-osd--block--1631feb6--d96c--5a43--89dd--a558edd73d68', 'dm-uuid-LVM-WKe72EsaKn2scALuG4mViXkQfzDrjkxxfQ3uecey7fXGvXJLFsqzCQ4cHhvUZlrO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c322448e--6042--58d0--bdfa--5021630018c9-osd--block--c322448e--6042--58d0--bdfa--5021630018c9', 'dm-uuid-LVM-CtcQh9shLksalAwi1IDQOa7qdl8NvgvDeTgTlyxw1rg0IE7jAYC9SABwh2bAeub6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 00:59:39.072423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part1', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part14', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part15', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part16', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1631feb6--d96c--5a43--89dd--a558edd73d68-osd--block--1631feb6--d96c--5a43--89dd--a558edd73d68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yyCkfM-Sg2M-Ic4d-BhS3-Esz0-MnBo-U5zR0u', 'scsi-0QEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421', 'scsi-SQEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c322448e--6042--58d0--bdfa--5021630018c9-osd--block--c322448e--6042--58d0--bdfa--5021630018c9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FdWrmS-YAjp-4MMF-PHJE-vNPl-hPln-goRiyk', 'scsi-0QEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678', 'scsi-SQEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a', 'scsi-SQEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 00:59:39.072475 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:59:39.072483 | orchestrator |
2026-01-05 00:59:39.072490 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-05 00:59:39.072496 | orchestrator | Monday 05 January 2026 00:57:42 +0000 (0:00:00.566) 0:00:17.716 ********
2026-01-05 00:59:39.072504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c0354e6--1633--54b4--ae3c--130b25b2cb6c-osd--block--3c0354e6--1633--54b4--ae3c--130b25b2cb6c', 'dm-uuid-LVM-1bNQGFvidc8nrkpPsYOdfDFIHYrFQGDVNYadW2DrsZLcIpedVCQWvwA5S76TEs8y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0807b7d--156a--51e9--a1ef--1ae613918df1-osd--block--a0807b7d--156a--51e9--a1ef--1ae613918df1', 'dm-uuid-LVM-tSTwV2iW4LjCTxZWgGeenUx76m7JqjXYZ2zcK3DpzSyV2V5mf7hYpMGRwS0AMSC3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072554 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4-osd--block--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4', 'dm-uuid-LVM-50FqfdBqQUcdg50l79EdQUKZYdvcLXdCBREro9BdYBXi8HBxSPBWBHTTiHZOlj0n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072561 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7959794c--cc9c--59d9--9b66--2faefa464ed4-osd--block--7959794c--cc9c--59d9--9b66--2faefa464ed4', 'dm-uuid-LVM-UK400EAe7rHq5oqlR20ULRKqz452MV7xjqxw9yWYFtxLz3ceifB7e1DtGshHklZH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072585 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072592 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_a095b6dd-cfa4-45ca-9d86-c84b8a7e4f6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072704 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 00:59:39.072720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3c0354e6--1633--54b4--ae3c--130b25b2cb6c-osd--block--3c0354e6--1633--54b4--ae3c--130b25b2cb6c'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wCF7O2-hVZ1-bGfi-mfQ0-cosE-pNsg-J8TXhC', 'scsi-0QEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20', 'scsi-SQEMU_QEMU_HARDDISK_eef1532f-ab8b-4fa5-967d-60adcf1e7a20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072729 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a0807b7d--156a--51e9--a1ef--1ae613918df1-osd--block--a0807b7d--156a--51e9--a1ef--1ae613918df1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-R41xO0-GZQJ-UybA-uIr2-kVN3-mGCm-2mGCPc', 'scsi-0QEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2', 'scsi-SQEMU_QEMU_HARDDISK_200a53d9-75f2-4262-8bd3-fb85b57756f2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8', 'scsi-SQEMU_QEMU_HARDDISK_5b0c3dba-a10f-46cc-b603-e7d957ac37b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072757 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072770 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072778 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072785 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072791 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072798 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part1', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part14', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part15', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part16', 'scsi-SQEMU_QEMU_HARDDISK_64a819d6-a948-46bf-979e-0c123b5ffe57-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 00:59:39.072827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4-osd--block--f1b84f59--e4b7--5f9e--a7e5--ba7b4020d7e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mjpc8L-T1wk-iEqY-Gz09-7Sdg-NV65-N5qSru', 'scsi-0QEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1', 'scsi-SQEMU_QEMU_HARDDISK_a4ccfa9c-aadc-4d61-bea8-541b8c29e5f1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072835 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:59:39.072842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7959794c--cc9c--59d9--9b66--2faefa464ed4-osd--block--7959794c--cc9c--59d9--9b66--2faefa464ed4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HAdYPE-BZe5-AHeR-h3Wm-6WWQ-jTlU-5natTC', 'scsi-0QEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613', 'scsi-SQEMU_QEMU_HARDDISK_92e48aa1-1628-4f72-a210-d4a4cd9ae613'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072859 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763', 'scsi-SQEMU_QEMU_HARDDISK_f5154273-2305-4d04-879f-ade05dd05763'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1631feb6--d96c--5a43--89dd--a558edd73d68-osd--block--1631feb6--d96c--5a43--89dd--a558edd73d68', 'dm-uuid-LVM-WKe72EsaKn2scALuG4mViXkQfzDrjkxxfQ3uecey7fXGvXJLFsqzCQ4cHhvUZlrO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072884 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:59:39.072892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c322448e--6042--58d0--bdfa--5021630018c9-osd--block--c322448e--6042--58d0--bdfa--5021630018c9', 'dm-uuid-LVM-CtcQh9shLksalAwi1IDQOa7qdl8NvgvDeTgTlyxw1rg0IE7jAYC9SABwh2bAeub6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072899 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072933 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072940 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072960 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.072974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part1', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part14', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part15', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part16', 'scsi-SQEMU_QEMU_HARDDISK_177d06a0-8f03-40a5-8e33-2819c29e72ab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 00:59:39.072979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1631feb6--d96c--5a43--89dd--a558edd73d68-osd--block--1631feb6--d96c--5a43--89dd--a558edd73d68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yyCkfM-Sg2M-Ic4d-BhS3-Esz0-MnBo-U5zR0u', 'scsi-0QEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421', 'scsi-SQEMU_QEMU_HARDDISK_67556553-b44f-4ecf-b7ec-7000501d4421'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.073006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c322448e--6042--58d0--bdfa--5021630018c9-osd--block--c322448e--6042--58d0--bdfa--5021630018c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FdWrmS-YAjp-4MMF-PHJE-vNPl-hPln-goRiyk', 'scsi-0QEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678', 'scsi-SQEMU_QEMU_HARDDISK_75b952fb-92c7-4e92-8330-2435d2a1b678'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.073025 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a', 'scsi-SQEMU_QEMU_HARDDISK_7205c955-31fc-4c08-90c1-5dd24967146a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.073033 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:59:39.073037 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:59:39.073041 | orchestrator | 2026-01-05 00:59:39.073045 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-05 00:59:39.073050 | orchestrator | Monday 05 January 2026 00:57:43 +0000 (0:00:00.584) 0:00:18.300 ******** 2026-01-05 00:59:39.073054 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.073058 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.073063 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.073067 | orchestrator | 2026-01-05 00:59:39.073071 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-05 00:59:39.073075 | orchestrator | Monday 05 January 2026 00:57:44 +0000 (0:00:00.725) 0:00:19.026 ******** 2026-01-05 00:59:39.073080 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.073083 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.073087 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.073091 | orchestrator | 2026-01-05 00:59:39.073095 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-05 00:59:39.073099 | orchestrator | Monday 05 January 2026 00:57:44 +0000 (0:00:00.513) 0:00:19.540 ******** 2026-01-05 00:59:39.073103 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:59:39.073107 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:59:39.073110 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:59:39.073114 | orchestrator | 2026-01-05 00:59:39.073118 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-05 00:59:39.073122 | orchestrator | Monday 05 January 2026 00:57:45 +0000 (0:00:00.685) 0:00:20.225 
********
2026-01-05 00:59:39.073126 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073130 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.073134 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:59:39.073143 | orchestrator |
2026-01-05 00:59:39.073147 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 00:59:39.073151 | orchestrator | Monday 05 January 2026 00:57:45 +0000 (0:00:00.315) 0:00:20.540 ********
2026-01-05 00:59:39.073155 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073159 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.073162 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:59:39.073166 | orchestrator |
2026-01-05 00:59:39.073171 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 00:59:39.073175 | orchestrator | Monday 05 January 2026 00:57:45 +0000 (0:00:00.437) 0:00:20.977 ********
2026-01-05 00:59:39.073179 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073182 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.073186 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:59:39.073190 | orchestrator |
2026-01-05 00:59:39.073194 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-05 00:59:39.073198 | orchestrator | Monday 05 January 2026 00:57:46 +0000 (0:00:00.542) 0:00:21.520 ********
2026-01-05 00:59:39.073202 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:59:39.073206 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:59:39.073211 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:59:39.073214 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:59:39.073218 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:59:39.073222 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:59:39.073226 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:59:39.073230 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:59:39.073234 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:59:39.073238 | orchestrator |
2026-01-05 00:59:39.073242 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-05 00:59:39.073246 | orchestrator | Monday 05 January 2026 00:57:47 +0000 (0:00:00.853) 0:00:22.373 ********
2026-01-05 00:59:39.073250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:59:39.073255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:59:39.073259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:59:39.073262 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:59:39.073273 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:59:39.073277 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:59:39.073281 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.073284 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:59:39.073288 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:59:39.073292 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:59:39.073296 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:59:39.073300 | orchestrator |
2026-01-05 00:59:39.073304 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-05 00:59:39.073308 | orchestrator | Monday 05 January 2026 00:57:47 +0000 (0:00:00.386) 0:00:22.760 ********
2026-01-05 00:59:39.073313 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:59:39.073317 | orchestrator |
2026-01-05 00:59:39.073321 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-05 00:59:39.073327 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:00.734) 0:00:23.495 ********
2026-01-05 00:59:39.073334 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073338 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.073342 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:59:39.073350 | orchestrator |
2026-01-05 00:59:39.073354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-05 00:59:39.073358 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:00.334) 0:00:23.829 ********
2026-01-05 00:59:39.073361 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073365 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.073369 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:59:39.073373 | orchestrator |
2026-01-05 00:59:39.073377 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-05 00:59:39.073380 | orchestrator | Monday 05 January 2026 00:57:49 +0000 (0:00:00.315) 0:00:24.145 ********
2026-01-05 00:59:39.073384 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073388 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.073392 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:59:39.073396 | orchestrator |
2026-01-05 00:59:39.073401 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-05 00:59:39.073404 | orchestrator | Monday 05 January 2026 00:57:49 +0000 (0:00:00.312) 0:00:24.457 ********
2026-01-05 00:59:39.073408 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:59:39.073412 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:59:39.073416 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:59:39.073420 | orchestrator |
2026-01-05 00:59:39.073424 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-05 00:59:39.073428 | orchestrator | Monday 05 January 2026 00:57:50 +0000 (0:00:00.610) 0:00:25.068 ********
2026-01-05 00:59:39.073431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:59:39.073435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:59:39.073439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:59:39.073443 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073447 | orchestrator |
2026-01-05 00:59:39.073451 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-05 00:59:39.073455 | orchestrator | Monday 05 January 2026 00:57:50 +0000 (0:00:00.398) 0:00:25.467 ********
2026-01-05 00:59:39.073459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:59:39.073463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:59:39.073467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:59:39.073471 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073474 | orchestrator |
2026-01-05 00:59:39.073478 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-05 00:59:39.073482 | orchestrator | Monday 05 January 2026 00:57:50 +0000 (0:00:00.371) 0:00:25.839 ********
2026-01-05 00:59:39.073486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:59:39.073490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:59:39.073494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:59:39.073498 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073502 | orchestrator |
2026-01-05 00:59:39.073506 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-05 00:59:39.073510 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:00.367) 0:00:26.207 ********
2026-01-05 00:59:39.073514 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:59:39.073519 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:59:39.073527 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:59:39.073533 | orchestrator |
2026-01-05 00:59:39.073539 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-05 00:59:39.073548 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:00.329) 0:00:26.536 ********
2026-01-05 00:59:39.073557 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-05 00:59:39.073563 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-05 00:59:39.073570 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-05 00:59:39.073576 | orchestrator |
2026-01-05 00:59:39.073582 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-05 00:59:39.073593 | orchestrator | Monday 05 January 2026 00:57:52 +0000 (0:00:00.510) 0:00:27.046 ********
2026-01-05 00:59:39.073599 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 00:59:39.073606 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:59:39.073613 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:59:39.073620 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:59:39.073631 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-05 00:59:39.073640 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-05 00:59:39.073646 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-05 00:59:39.073654 | orchestrator |
2026-01-05 00:59:39.073659 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-05 00:59:39.073663 | orchestrator | Monday 05 January 2026 00:57:53 +0000 (0:00:01.020) 0:00:28.067 ********
2026-01-05 00:59:39.073667 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 00:59:39.073671 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:59:39.073675 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:59:39.073679 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:59:39.073683 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-05 00:59:39.073686 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-05 00:59:39.073696 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-05 00:59:39.073701 | orchestrator |
2026-01-05 00:59:39.073705 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-01-05 00:59:39.073710 | orchestrator | Monday 05 January 2026 00:57:55 +0000 (0:00:02.091) 0:00:30.158 ********
2026-01-05 00:59:39.073714 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:59:39.073719 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:59:39.073726 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-01-05 00:59:39.073731 | orchestrator |
2026-01-05 00:59:39.073737 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-01-05 00:59:39.073742 | orchestrator | Monday 05 January 2026 00:57:55 +0000 (0:00:00.426) 0:00:30.585 ********
2026-01-05 00:59:39.073749 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:59:39.073758 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:59:39.073764 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:59:39.073769 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:59:39.073783 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:59:39.073789 | orchestrator |
2026-01-05 00:59:39.073794 | orchestrator | TASK [generate keys] ***********************************************************
2026-01-05 00:59:39.073800 | orchestrator | Monday 05 January 2026 00:58:43 +0000 (0:00:47.870) 0:01:18.456 ********
2026-01-05 00:59:39.073807 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073813 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073819 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073825 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073830 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073836 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073841 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-01-05 00:59:39.073847 | orchestrator |
2026-01-05 00:59:39.073853 | orchestrator | TASK [get keys from monitors] **************************************************
2026-01-05 00:59:39.073860 | orchestrator | Monday 05 January 2026 00:59:08 +0000 (0:00:24.798) 0:01:43.254 ********
2026-01-05 00:59:39.073866 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073873 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073885 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073893 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073900 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073907 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073914 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-05 00:59:39.073920 | orchestrator |
2026-01-05 00:59:39.073926 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-01-05 00:59:39.073933 | orchestrator | Monday 05 January 2026 00:59:20 +0000 (0:00:11.872) 0:01:55.127 ********
2026-01-05 00:59:39.073939 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073943 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-05 00:59:39.073946 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-05 00:59:39.073950 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073954 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-05 00:59:39.073963 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-05 00:59:39.073968 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.073972 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-05 00:59:39.073976 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-05 00:59:39.073980 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.074006 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-05 00:59:39.074011 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-05 00:59:39.074063 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.074082 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-05 00:59:39.074090 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-05 00:59:39.074096 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:59:39.074103 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-05 00:59:39.074110 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-05 00:59:39.074117 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-01-05 00:59:39.074124 | orchestrator |
2026-01-05 00:59:39.074131 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:59:39.074139 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-05 00:59:39.074147 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-05 00:59:39.074156 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-05 00:59:39.074163 | orchestrator |
2026-01-05 00:59:39.074170 | orchestrator |
2026-01-05 00:59:39.074177 | orchestrator |
2026-01-05 00:59:39.074185 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:59:39.074192 | orchestrator | Monday 05 January 2026 00:59:37 +0000 (0:00:17.538) 0:02:12.665 ********
2026-01-05 00:59:39.074199 | orchestrator | ===============================================================================
2026-01-05 00:59:39.074207 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.87s
2026-01-05 00:59:39.074212 | orchestrator | generate keys ---------------------------------------------------------- 24.80s
2026-01-05 00:59:39.074216 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.54s
2026-01-05 00:59:39.074220 | orchestrator | get keys from monitors ------------------------------------------------- 11.87s
2026-01-05 00:59:39.074225 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.30s
2026-01-05 00:59:39.074232 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.09s
2026-01-05 00:59:39.074238 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.90s
2026-01-05 00:59:39.074246 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s
2026-01-05 00:59:39.074256 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.91s
2026-01-05 00:59:39.074263 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s
2026-01-05 00:59:39.074269 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s
2026-01-05 00:59:39.074274 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.80s
2026-01-05 00:59:39.074281 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s
2026-01-05 00:59:39.074287 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s
2026-01-05 00:59:39.074293 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.70s
2026-01-05 00:59:39.074305 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s
2026-01-05 00:59:39.074311 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s
2026-01-05 00:59:39.074318 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s
2026-01-05 00:59:39.074324 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.61s
2026-01-05 00:59:39.074330 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2026-01-05 00:59:39.075273 | orchestrator | 2026-01-05 00:59:39 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 00:59:39.075351 | orchestrator | 2026-01-05 00:59:39 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:42.108948 | orchestrator | 2026-01-05 00:59:42 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 00:59:42.109893 | orchestrator | 2026-01-05 00:59:42 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 00:59:42.110608 | orchestrator | 2026-01-05 00:59:42 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 00:59:42.111155 | orchestrator | 2026-01-05 00:59:42 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:45.174105 | orchestrator | 2026-01-05 00:59:45 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 00:59:45.174249 | orchestrator | 2026-01-05 00:59:45 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 00:59:45.174260 | orchestrator | 2026-01-05 00:59:45 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 00:59:45.174267 | orchestrator | 2026-01-05 00:59:45 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:48.192041 | orchestrator | 2026-01-05 00:59:48 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 00:59:48.195560 | orchestrator | 2026-01-05 00:59:48 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 00:59:48.196761 | orchestrator | 2026-01-05 00:59:48 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 00:59:48.196793 | orchestrator | 2026-01-05 00:59:48 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:51.247643 | orchestrator | 2026-01-05 00:59:51 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 00:59:51.250372 | orchestrator | 2026-01-05 00:59:51 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 00:59:51.253378 | orchestrator | 2026-01-05 00:59:51 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 00:59:51.253466 | orchestrator | 2026-01-05 00:59:51 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:54.288226 | orchestrator | 2026-01-05 00:59:54 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 00:59:54.288919 | orchestrator | 2026-01-05 00:59:54 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 00:59:54.289855 | orchestrator | 2026-01-05 00:59:54 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 00:59:54.289893 | orchestrator | 2026-01-05 00:59:54 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:57.325708 | orchestrator | 2026-01-05 00:59:57 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 00:59:57.326826 | orchestrator | 2026-01-05 00:59:57 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 00:59:57.330374 | orchestrator | 2026-01-05 00:59:57 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 00:59:57.330459 | orchestrator | 2026-01-05 00:59:57 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:00.366143 | orchestrator | 2026-01-05 01:00:00 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:00.366848 | orchestrator | 2026-01-05 01:00:00 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 01:00:00.367741 | orchestrator | 2026-01-05 01:00:00 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:00.367779 | orchestrator | 2026-01-05 01:00:00 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:03.418510 | orchestrator | 2026-01-05 01:00:03 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:03.422428 | orchestrator | 2026-01-05 01:00:03 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 01:00:03.423032 | orchestrator | 2026-01-05 01:00:03 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:03.423159 | orchestrator | 2026-01-05 01:00:03 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:06.473508 | orchestrator | 2026-01-05 01:00:06 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:06.474801 | orchestrator | 2026-01-05 01:00:06 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 01:00:06.476106 | orchestrator | 2026-01-05 01:00:06 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:06.476140 | orchestrator | 2026-01-05 01:00:06 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:09.518410 | orchestrator | 2026-01-05 01:00:09 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:09.521451 | orchestrator | 2026-01-05 01:00:09 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 01:00:09.523905 | orchestrator | 2026-01-05 01:00:09 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:09.525259 | orchestrator | 2026-01-05 01:00:09 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:12.563976 | orchestrator | 2026-01-05 01:00:12 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:12.564029 | orchestrator | 2026-01-05 01:00:12 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 01:00:12.565790 | orchestrator | 2026-01-05 01:00:12 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:12.565819 | orchestrator | 2026-01-05 01:00:12 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:15.615761 | orchestrator | 2026-01-05 01:00:15 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:15.616303 | orchestrator | 2026-01-05 01:00:15 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state STARTED
2026-01-05 01:00:15.618619 | orchestrator | 2026-01-05 01:00:15 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:15.618674 | orchestrator | 2026-01-05 01:00:15 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:18.662955 | orchestrator | 2026-01-05 01:00:18 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:18.664718 | orchestrator | 2026-01-05 01:00:18 | INFO  | Task 7d33aef4-e959-46b2-950b-f150b38984ad is in state SUCCESS
2026-01-05 01:00:18.667762 | orchestrator | 2026-01-05 01:00:18 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:18.668338 | orchestrator | 2026-01-05 01:00:18 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:21.710059 | orchestrator | 2026-01-05 01:00:21 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED
2026-01-05 01:00:21.710207 | orchestrator | 2026-01-05 01:00:21 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:21.711679 | orchestrator | 2026-01-05 01:00:21 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:21.711724 | orchestrator | 2026-01-05 01:00:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:24.763631 | orchestrator | 2026-01-05 01:00:24 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED
2026-01-05 01:00:24.768102 | orchestrator | 2026-01-05 01:00:24 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:24.769946 | orchestrator | 2026-01-05 01:00:24 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:24.770039 | orchestrator | 2026-01-05 01:00:24 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:27.806990 | orchestrator | 2026-01-05 01:00:27 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED
2026-01-05 01:00:27.809278 | orchestrator | 2026-01-05 01:00:27 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:27.812850 | orchestrator | 2026-01-05 01:00:27 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:27.813071 | orchestrator | 2026-01-05 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:30.860016 | orchestrator | 2026-01-05 01:00:30 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED
2026-01-05 01:00:30.861141 | orchestrator | 2026-01-05 01:00:30 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:30.861704 | orchestrator | 2026-01-05 01:00:30 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:30.861789 | orchestrator | 2026-01-05 01:00:30 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:33.901142 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED
2026-01-05 01:00:33.904702 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:33.907173 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state STARTED
2026-01-05 01:00:33.907223 | orchestrator | 2026-01-05 01:00:33 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:36.957356 | orchestrator | 2026-01-05 01:00:36 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:00:36.957420 | orchestrator | 2026-01-05 01:00:36 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED
2026-01-05 01:00:36.957438 | orchestrator | 2026-01-05 01:00:36 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED
2026-01-05 01:00:36.958506 | orchestrator | 2026-01-05 01:00:36 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:00:36.959666 | orchestrator | 2026-01-05 01:00:36 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:00:36.965115 | orchestrator |
2026-01-05 01:00:36.965163 | orchestrator |
2026-01-05 01:00:36.965168 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-01-05 01:00:36.965173 | orchestrator |
2026-01-05 01:00:36.965177 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-01-05 01:00:36.965181 | orchestrator | Monday 05 January 2026 00:59:42 +0000 (0:00:00.162) 0:00:00.162 ********
2026-01-05 01:00:36.965185 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-05 01:00:36.965191 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965195 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965199 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-05 01:00:36.965202 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965206 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-05 01:00:36.965221 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-05 01:00:36.965225 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-05 01:00:36.965229 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-05 01:00:36.965233 | orchestrator |
2026-01-05 01:00:36.965237 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-01-05 01:00:36.965241 | orchestrator | Monday 05 January 2026 00:59:47 +0000 (0:00:05.008) 0:00:05.170 ********
2026-01-05 01:00:36.965245 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-05 01:00:36.965249 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965252 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965256 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-05 01:00:36.965260 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965263 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-05 01:00:36.965269 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-05 01:00:36.965275 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-05 01:00:36.965281 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-05 01:00:36.965287 | orchestrator |
2026-01-05 01:00:36.965293 | orchestrator | TASK [Create share directory] **************************************************
2026-01-05 01:00:36.965300 | orchestrator | Monday 05 January 2026 00:59:52 +0000 (0:00:04.783) 0:00:09.954 ********
2026-01-05 01:00:36.965307 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-05 01:00:36.965312 | orchestrator |
2026-01-05 01:00:36.965318 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-01-05 01:00:36.965324 | orchestrator | Monday 05 January 2026 00:59:53 +0000 (0:00:01.029) 0:00:10.983 ********
2026-01-05 01:00:36.965329 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-01-05 01:00:36.965335 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965349 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965355 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-01-05 01:00:36.965361 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965366 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-01-05 01:00:36.965372 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-01-05 01:00:36.965377 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-01-05 01:00:36.965426 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-01-05 01:00:36.965432 | orchestrator |
2026-01-05 01:00:36.965438 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-01-05 01:00:36.965478 | orchestrator | Monday 05 January 2026 01:00:07 +0000 (0:00:14.056) 0:00:25.039 ********
2026-01-05 01:00:36.965487 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-01-05 01:00:36.965494 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-01-05 01:00:36.965508 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-05 01:00:36.965514 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-05 01:00:36.965532 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-05 01:00:36.965785 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-05 01:00:36.965795 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-01-05 01:00:36.965801 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-01-05 01:00:36.965807 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-01-05 01:00:36.965812 | orchestrator |
2026-01-05 01:00:36.965818 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-01-05 01:00:36.965824 | orchestrator | Monday 05 January 2026 01:00:10 +0000 (0:00:03.212) 0:00:28.252 ********
2026-01-05 01:00:36.965831 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-01-05 01:00:36.965836 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965842 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965848 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-01-05 01:00:36.965854 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-05 01:00:36.965859 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-01-05 01:00:36.965866 | orchestrator | changed: [testbed-manager] =>
(item=ceph.client.glance.keyring) 2026-01-05 01:00:36.965873 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-05 01:00:36.965940 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-05 01:00:36.965947 | orchestrator | 2026-01-05 01:00:36.965954 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:00:36.966092 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:00:36.966101 | orchestrator | 2026-01-05 01:00:36.966156 | orchestrator | 2026-01-05 01:00:36.966276 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:00:36.966281 | orchestrator | Monday 05 January 2026 01:00:18 +0000 (0:00:07.220) 0:00:35.472 ******** 2026-01-05 01:00:36.966285 | orchestrator | =============================================================================== 2026-01-05 01:00:36.966288 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.06s 2026-01-05 01:00:36.966292 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.22s 2026-01-05 01:00:36.966296 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.01s 2026-01-05 01:00:36.966299 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.78s 2026-01-05 01:00:36.966303 | orchestrator | Check if target directories exist --------------------------------------- 3.21s 2026-01-05 01:00:36.966307 | orchestrator | Create share directory -------------------------------------------------- 1.03s 2026-01-05 01:00:36.966311 | orchestrator | 2026-01-05 01:00:36.966314 | orchestrator | 2026-01-05 01:00:36.966318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:00:36.966322 | 
orchestrator | 2026-01-05 01:00:36.966325 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:00:36.966329 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:00.264) 0:00:00.265 ******** 2026-01-05 01:00:36.966333 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:36.966337 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:36.966341 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:36.966351 | orchestrator | 2026-01-05 01:00:36.966355 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:00:36.966358 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:00.294) 0:00:00.559 ******** 2026-01-05 01:00:36.966364 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-05 01:00:36.966370 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-05 01:00:36.966415 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-05 01:00:36.966425 | orchestrator | 2026-01-05 01:00:36.966431 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-05 01:00:36.966438 | orchestrator | 2026-01-05 01:00:36.966446 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:36.966453 | orchestrator | Monday 05 January 2026 00:59:30 +0000 (0:00:00.454) 0:00:01.014 ******** 2026-01-05 01:00:36.966457 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:00:36.966461 | orchestrator | 2026-01-05 01:00:36.966465 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-05 01:00:36.966469 | orchestrator | Monday 05 January 2026 00:59:30 +0000 (0:00:00.553) 0:00:01.567 ******** 2026-01-05 01:00:36.966519 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.966532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.966540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.966560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966755 | orchestrator | 2026-01-05 01:00:36.966759 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-05 01:00:36.966764 | orchestrator | Monday 05 January 2026 00:59:32 +0000 (0:00:02.006) 0:00:03.574 ******** 2026-01-05 01:00:36.966768 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.966772 | orchestrator | 2026-01-05 01:00:36.966775 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-05 01:00:36.966779 | orchestrator | Monday 05 January 2026 00:59:32 +0000 (0:00:00.137) 0:00:03.712 ******** 2026-01-05 01:00:36.966783 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.966787 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.966790 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.966794 | orchestrator | 
2026-01-05 01:00:36.966798 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-05 01:00:36.966802 | orchestrator | Monday 05 January 2026 00:59:33 +0000 (0:00:00.482) 0:00:04.194 ******** 2026-01-05 01:00:36.966806 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:00:36.966809 | orchestrator | 2026-01-05 01:00:36.966817 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:36.966821 | orchestrator | Monday 05 January 2026 00:59:34 +0000 (0:00:00.877) 0:00:05.071 ******** 2026-01-05 01:00:36.966825 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:00:36.966829 | orchestrator | 2026-01-05 01:00:36.966833 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-05 01:00:36.966836 | orchestrator | Monday 05 January 2026 00:59:34 +0000 (0:00:00.547) 0:00:05.619 ******** 2026-01-05 01:00:36.966855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.966860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.966868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.966874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.966976 | orchestrator | 2026-01-05 01:00:36.966986 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-05 01:00:36.966993 | orchestrator | Monday 05 January 2026 00:59:38 +0000 (0:00:03.702) 0:00:09.321 ******** 2026-01-05 01:00:36.967004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967045 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.967052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967079 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.967088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967131 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.967137 | orchestrator | 2026-01-05 
01:00:36.967144 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-05 01:00:36.967149 | orchestrator | Monday 05 January 2026 00:59:39 +0000 (0:00:00.735) 0:00:10.057 ******** 2026-01-05 01:00:36.967153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36 | INFO  | Task 1269dca1-3524-4e8a-bbbb-15a2cf4fa46b is in state SUCCESS 2026-01-05 01:00:36.967183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967211 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.967218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967224 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.967228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967249 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.967256 | orchestrator | 2026-01-05 01:00:36.967278 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-05 01:00:36.967290 | orchestrator | Monday 05 
January 2026 00:59:40 +0000 (0:00:00.977) 0:00:11.035 ******** 2026-01-05 01:00:36.967297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.967302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.967312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.967320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967370 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967421 | orchestrator | 2026-01-05 01:00:36.967427 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-05 01:00:36.967433 | orchestrator | Monday 05 January 2026 00:59:43 +0000 (0:00:03.433) 0:00:14.468 ******** 2026-01-05 01:00:36.967462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.967476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.967490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.967508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.967557 | orchestrator | 2026-01-05 01:00:36.967563 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-05 01:00:36.967569 | orchestrator | Monday 05 January 2026 00:59:49 +0000 (0:00:06.293) 0:00:20.762 ******** 2026-01-05 01:00:36.967576 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:36.967582 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:00:36.967586 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:00:36.967590 | orchestrator | 2026-01-05 01:00:36.967594 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-05 01:00:36.967598 | orchestrator | Monday 05 January 2026 00:59:51 +0000 (0:00:01.681) 0:00:22.443 ******** 2026-01-05 01:00:36.967601 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.967605 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.967609 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.967613 | orchestrator | 2026-01-05 01:00:36.967616 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-05 01:00:36.967620 | orchestrator | Monday 05 January 2026 00:59:52 
+0000 (0:00:00.577) 0:00:23.020 ******** 2026-01-05 01:00:36.967624 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.967628 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.967631 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.967635 | orchestrator | 2026-01-05 01:00:36.967639 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-05 01:00:36.967643 | orchestrator | Monday 05 January 2026 00:59:52 +0000 (0:00:00.311) 0:00:23.331 ******** 2026-01-05 01:00:36.967646 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.967650 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.967654 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.967661 | orchestrator | 2026-01-05 01:00:36.967668 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-05 01:00:36.967672 | orchestrator | Monday 05 January 2026 00:59:53 +0000 (0:00:00.572) 0:00:23.904 ******** 2026-01-05 01:00:36.967688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967707 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.967715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967735 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.967755 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.967763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.967770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.967776 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.967783 | orchestrator | 2026-01-05 01:00:36.967789 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:36.967796 | orchestrator | Monday 05 January 2026 00:59:53 +0000 (0:00:00.605) 0:00:24.509 ******** 2026-01-05 01:00:36.967802 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.967808 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.967815 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.967821 | orchestrator | 2026-01-05 01:00:36.967828 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-05 01:00:36.967839 | orchestrator | Monday 05 January 2026 00:59:53 +0000 (0:00:00.290) 0:00:24.800 ******** 2026-01-05 01:00:36.967845 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:00:36.967852 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:00:36.967859 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:00:36.967865 | orchestrator | 2026-01-05 01:00:36.967871 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-05 01:00:36.967893 | orchestrator | Monday 05 January 2026 00:59:55 +0000 (0:00:01.887) 0:00:26.687 ******** 2026-01-05 01:00:36.967900 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-01-05 01:00:36.967906 | orchestrator | 2026-01-05 01:00:36.967913 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-05 01:00:36.967923 | orchestrator | Monday 05 January 2026 00:59:57 +0000 (0:00:01.204) 0:00:27.892 ******** 2026-01-05 01:00:36.967929 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.967936 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.967942 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.967948 | orchestrator | 2026-01-05 01:00:36.967955 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-05 01:00:36.967961 | orchestrator | Monday 05 January 2026 00:59:58 +0000 (0:00:01.167) 0:00:29.060 ******** 2026-01-05 01:00:36.967968 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 01:00:36.967974 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 01:00:36.967981 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:00:36.967988 | orchestrator | 2026-01-05 01:00:36.967995 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-05 01:00:36.968002 | orchestrator | Monday 05 January 2026 00:59:59 +0000 (0:00:01.441) 0:00:30.502 ******** 2026-01-05 01:00:36.968009 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:36.968016 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:36.968023 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:36.968030 | orchestrator | 2026-01-05 01:00:36.968037 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-05 01:00:36.968044 | orchestrator | Monday 05 January 2026 01:00:00 +0000 (0:00:00.342) 0:00:30.845 ******** 2026-01-05 01:00:36.968052 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:00:36.968059 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:00:36.968066 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:00:36.968095 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:00:36.968104 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:00:36.968111 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:00:36.968118 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:00:36.968125 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:00:36.968132 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:00:36.968139 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:00:36.968146 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:00:36.968153 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:00:36.968160 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:00:36.968172 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:00:36.968179 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:00:36.968186 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-01-05 01:00:36.968193 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:00:36.968200 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:00:36.968207 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:00:36.968214 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:00:36.968221 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:00:36.968228 | orchestrator | 2026-01-05 01:00:36.968235 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-05 01:00:36.968242 | orchestrator | Monday 05 January 2026 01:00:09 +0000 (0:00:09.893) 0:00:40.738 ******** 2026-01-05 01:00:36.968250 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:00:36.968257 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:00:36.968264 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:00:36.968271 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:00:36.968278 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:00:36.968285 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:00:36.968292 | orchestrator | 2026-01-05 01:00:36.968299 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-01-05 01:00:36.968306 | orchestrator | Monday 05 January 2026 01:00:12 +0000 (0:00:02.823) 0:00:43.562 ******** 2026-01-05 01:00:36.968318 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.968346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.968359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-05 01:00:36.968366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.968378 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.968385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:36.968396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.968403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.968413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:36.968420 | orchestrator | 2026-01-05 01:00:36.968426 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-01-05 01:00:36.968432 | orchestrator | Monday 05 January 2026 01:00:15 +0000 (0:00:02.699) 0:00:46.262 ******** 2026-01-05 01:00:36.968439 | orchestrator | changed: [testbed-node-0] => { 2026-01-05 01:00:36.968446 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 01:00:36.968454 | orchestrator | } 2026-01-05 01:00:36.968461 | orchestrator | changed: [testbed-node-1] => { 2026-01-05 01:00:36.968468 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 01:00:36.968475 | orchestrator | } 2026-01-05 01:00:36.968482 | orchestrator | changed: [testbed-node-2] => { 2026-01-05 01:00:36.968489 | orchestrator |  "msg": "Notifying handlers" 2026-01-05 01:00:36.968496 | orchestrator | } 
2026-01-05 01:00:36.968503 | orchestrator | 2026-01-05 01:00:36.968510 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-05 01:00:36.968517 | orchestrator | Monday 05 January 2026 01:00:15 +0000 (0:00:00.385) 0:00:46.647 ******** 2026-01-05 01:00:36.968528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.968536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.968550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.968561 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.968569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.968576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.968584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.968591 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.968601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-05 01:00:36.968618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:36.968625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:36.968632 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.968639 | orchestrator | 2026-01-05 01:00:36.968646 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:36.968653 | orchestrator | Monday 05 January 2026 01:00:16 +0000 (0:00:01.026) 0:00:47.674 ******** 2026-01-05 01:00:36.968660 | 
orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.968666 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.968673 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.968680 | orchestrator | 2026-01-05 01:00:36.968687 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-05 01:00:36.968694 | orchestrator | Monday 05 January 2026 01:00:17 +0000 (0:00:00.329) 0:00:48.003 ******** 2026-01-05 01:00:36.968701 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:36.968708 | orchestrator | 2026-01-05 01:00:36.968715 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-05 01:00:36.968722 | orchestrator | Monday 05 January 2026 01:00:19 +0000 (0:00:02.493) 0:00:50.497 ******** 2026-01-05 01:00:36.968729 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:36.968737 | orchestrator | 2026-01-05 01:00:36.968744 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-05 01:00:36.968751 | orchestrator | Monday 05 January 2026 01:00:22 +0000 (0:00:02.390) 0:00:52.888 ******** 2026-01-05 01:00:36.968758 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:36.968765 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:36.968773 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:36.968780 | orchestrator | 2026-01-05 01:00:36.968786 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-05 01:00:36.968793 | orchestrator | Monday 05 January 2026 01:00:23 +0000 (0:00:01.069) 0:00:53.957 ******** 2026-01-05 01:00:36.968800 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:36.968807 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:36.968814 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:36.968821 | orchestrator | 2026-01-05 01:00:36.968828 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping 
and not all hosts targeted] *** 2026-01-05 01:00:36.968835 | orchestrator | Monday 05 January 2026 01:00:23 +0000 (0:00:00.363) 0:00:54.321 ******** 2026-01-05 01:00:36.968842 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:36.968848 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:36.968855 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:36.968862 | orchestrator | 2026-01-05 01:00:36.968868 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-05 01:00:36.968875 | orchestrator | Monday 05 January 2026 01:00:24 +0000 (0:00:00.580) 0:00:54.901 ******** 2026-01-05 01:00:36.969021 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\n2026-01-05 01:00:25.605 INFO Loading config file at /var/lib/kolla/config_files/config.json\n2026-01-05 01:00:25.606 INFO Validating config file\n2026-01-05 01:00:25.606 INFO Kolla config strategy set to: COPY_ALWAYS\n2026-01-05 01:00:25.611 INFO Copying service configuration files\n2026-01-05 01:00:25.611 INFO Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\n2026-01-05 01:00:25.618 INFO Setting permission for /usr/bin/keystone-startup.sh\n2026-01-05 01:00:25.618 INFO Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\n2026-01-05 01:00:25.619 INFO Setting permission for /etc/keystone/keystone.conf\n2026-01-05 01:00:25.619 INFO Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\n2026-01-05 01:00:25.626 INFO Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\n2026-01-05 01:00:25.627 INFO Creating directory /var/lib/kolla/share/ca-certificates\n2026-01-05 01:00:25.627 INFO Setting permission for /var/lib/kolla/share/ca-certificates\n2026-01-05 01:00:25.627 INFO Copying 
/var/lib/kolla/config_files/ca-certificates/testbed.crt to /var/lib/kolla/share/ca-certificates/testbed.crt\n2026-01-05 01:00:25.628 INFO Setting permission for /var/lib/kolla/share/ca-certificates/testbed.crt\n2026-01-05 01:00:25.628 INFO Writing out command to execute\n2026-01-05 01:00:25.628 INFO Setting permission for /var/log/kolla\n2026-01-05 01:00:25.629 INFO Setting permission for /etc/keystone/fernet-keys\n++ cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n++ mkdir -p /var/log/kolla/keystone\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ chown keystone:kolla /var/log/kolla/keystone\n++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'\n++ touch /var/log/kolla/keystone/keystone.log\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n++ chown keystone:keystone /var/log/kolla/keystone/keystone.log\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n '' ]]\n++ [[ -n '' ]]\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync\n2026-01-05 01:00:34.321 1081 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:397\n2026-01-05 01:00:34.325 1081 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown 
system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-05 01:00:34.325 1081 ERROR keystone Traceback (most recent call last):\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-05 01:00:34.325 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection\n2026-01-05 01:00:34.325 1081 ERROR keystone return self.pool.connect()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-05 01:00:34.325 1081 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-05 01:00:34.325 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-05 01:00:34.325 1081 ERROR keystone rec = pool._do_get()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-05 01:00:34.325 1081 ERROR keystone with util.safe_reraise():\n2026-01-05 01:00:34.325 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-05 01:00:34.325 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-05 01:00:34.325 1081 ERROR keystone return self._create_connection()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-05 01:00:34.325 1081 ERROR keystone return _ConnectionRecord(self)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-05 01:00:34.325 1081 ERROR keystone self.__connect()\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-05 01:00:34.325 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-05 01:00:34.325 1081 ERROR keystone self(*args, **kw)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-05 01:00:34.325 1081 ERROR keystone fn(*args, **kw)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go\n2026-01-05 01:00:34.325 1081 ERROR keystone 
return once_fn(*arg, **kw)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect\n2026-01-05 01:00:34.325 1081 ERROR keystone dialect.initialize(c)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize\n2026-01-05 01:00:34.325 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize\n2026-01-05 01:00:34.325 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level\n2026-01-05 01:00:34.325 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level\n2026-01-05 01:00:34.325 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-05 01:00:34.325 1081 ERROR keystone result = self._query(query)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-05 01:00:34.325 1081 ERROR 
keystone conn.query(q)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-05 01:00:34.325 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-05 01:00:34.325 1081 ERROR keystone result.read()\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-05 01:00:34.325 1081 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-05 01:00:34.325 1081 ERROR keystone packet.raise_for_error()\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-05 01:00:34.325 1081 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-05 01:00:34.325 1081 ERROR keystone raise errorclass(errno, errval)\n2026-01-05 01:00:34.325 1081 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-05 01:00:34.325 1081 ERROR keystone \n2026-01-05 01:00:34.325 1081 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-05 01:00:34.325 1081 ERROR keystone \n2026-01-05 01:00:34.325 1081 ERROR keystone Traceback (most 
recent call last):\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in <module>\n2026-01-05 01:00:34.325 1081 ERROR keystone sys.exit(main())\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-05 01:00:34.325 1081 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1727, in main\n2026-01-05 01:00:34.325 1081 ERROR keystone CONF.command.cmd_class.main()\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 492, in main\n2026-01-05 01:00:34.325 1081 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 321, in offline_sync_database_to_version\n2026-01-05 01:00:34.325 1081 ERROR keystone _db_sync(engine=engine)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 210, in _db_sync\n2026-01-05 01:00:34.325 1081 ERROR keystone with sql.session_for_write() as session:\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-05 01:00:34.325 1081 ERROR keystone return next(self.gen)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1199, in _transaction_scope\n2026-01-05 01:00:34.325 1081 ERROR keystone with current._produce_block(\n2026-01-05 01:00:34.325 1081 ERROR keystone File 
\"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-05 01:00:34.325 1081 ERROR keystone return next(self.gen)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 841, in _session\n2026-01-05 01:00:34.325 1081 ERROR keystone self.session = self.factory._create_session(\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 459, in _create_session\n2026-01-05 01:00:34.325 1081 ERROR keystone self._start()\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 530, in _start\n2026-01-05 01:00:34.325 1081 ERROR keystone self._setup_for_connection(\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 647, in _setup_for_connection\n2026-01-05 01:00:34.325 1081 ERROR keystone engine = engines.create_engine(\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-05 01:00:34.325 1081 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 271, in create_engine\n2026-01-05 01:00:34.325 1081 ERROR keystone _test_connection(engine_event_target, max_retries, retry_interval)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", 
line 169, in _test_connection\n2026-01-05 01:00:34.325 1081 ERROR keystone conn = engine.connect()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3274, in connect\n2026-01-05 01:00:34.325 1081 ERROR keystone return self._connection_cls(self)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-05 01:00:34.325 1081 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2436, in _handle_dbapi_exception_noconnection\n2026-01-05 01:00:34.325 1081 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-05 01:00:34.325 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection\n2026-01-05 01:00:34.325 1081 ERROR keystone return self.pool.connect()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-05 01:00:34.325 1081 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-05 01:00:34.325 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-05 01:00:34.325 1081 ERROR keystone rec = pool._do_get()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-05 01:00:34.325 1081 ERROR keystone with util.safe_reraise():\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-05 01:00:34.325 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-05 01:00:34.325 1081 ERROR keystone return self._create_connection()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-05 01:00:34.325 1081 ERROR keystone return _ConnectionRecord(self)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-05 01:00:34.325 1081 ERROR keystone self.__connect()\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-05 01:00:34.325 1081 
ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-05 01:00:34.325 1081 ERROR keystone self(*args, **kw)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-05 01:00:34.325 1081 ERROR keystone fn(*args, **kw)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go\n2026-01-05 01:00:34.325 1081 ERROR keystone return once_fn(*arg, **kw)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect\n2026-01-05 01:00:34.325 1081 ERROR keystone dialect.initialize(c)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize\n2026-01-05 01:00:34.325 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize\n2026-01-05 01:00:34.325 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level\n2026-01-05 01:00:34.325 1081 ERROR keystone return 
self.get_isolation_level(dbapi_conn)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level\n2026-01-05 01:00:34.325 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-05 01:00:34.325 1081 ERROR keystone result = self._query(query)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-05 01:00:34.325 1081 ERROR keystone conn.query(q)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-05 01:00:34.325 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-05 01:00:34.325 1081 ERROR keystone result.read()\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-05 01:00:34.325 1081 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-05 01:00:34.325 1081 ERROR keystone packet.raise_for_error()\n2026-01-05 
01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-05 01:00:34.325 1081 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-05 01:00:34.325 1081 ERROR keystone raise errorclass(errno, errval)\n2026-01-05 01:00:34.325 1081 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-05 01:00:34.325 1081 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-05 01:00:34.325 1081 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", "2026-01-05 01:00:25.605 INFO Loading config file at /var/lib/kolla/config_files/config.json", "2026-01-05 01:00:25.606 INFO Validating config file", "2026-01-05 01:00:25.606 INFO Kolla config strategy set to: COPY_ALWAYS", "2026-01-05 01:00:25.611 INFO Copying service configuration files", "2026-01-05 01:00:25.611 INFO Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "2026-01-05 01:00:25.618 INFO Setting permission for /usr/bin/keystone-startup.sh", "2026-01-05 01:00:25.618 INFO Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "2026-01-05 01:00:25.619 INFO Setting permission for /etc/keystone/keystone.conf", "2026-01-05 01:00:25.619 INFO Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "2026-01-05 01:00:25.626 INFO Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "2026-01-05 01:00:25.627 INFO Creating directory /var/lib/kolla/share/ca-certificates", "2026-01-05 01:00:25.627 INFO Setting permission for /var/lib/kolla/share/ca-certificates", "2026-01-05 01:00:25.627 INFO Copying 
/var/lib/kolla/config_files/ca-certificates/testbed.crt to /var/lib/kolla/share/ca-certificates/testbed.crt", "2026-01-05 01:00:25.628 INFO Setting permission for /var/lib/kolla/share/ca-certificates/testbed.crt", "2026-01-05 01:00:25.628 INFO Writing out command to execute", "2026-01-05 01:00:25.628 INFO Setting permission for /var/log/kolla", "2026-01-05 01:00:25.629 INFO Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! -d /var/log/kolla/keystone ]]", "++ mkdir -p /var/log/kolla/keystone", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ chown keystone:kolla /var/log/kolla/keystone", "++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'", "++ touch /var/log/kolla/keystone/keystone.log", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n '' ]]", "++ [[ -n '' ]]", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync", "2026-01-05 01:00:34.321 1081 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:397", "2026-01-05 01:00:34.325 1081 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-05 01:00:34.325 1081 ERROR keystone Traceback (most recent call last):", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-05 01:00:34.325 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection", "2026-01-05 01:00:34.325 1081 ERROR keystone return self.pool.connect()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 
449, in connect", "2026-01-05 01:00:34.325 1081 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-05 01:00:34.325 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-05 01:00:34.325 1081 ERROR keystone rec = pool._do_get()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-05 01:00:34.325 1081 ERROR keystone with util.safe_reraise():", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-05 01:00:34.325 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-05 01:00:34.325 1081 ERROR keystone return self._create_connection()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-05 01:00:34.325 1081 ERROR keystone return _ConnectionRecord(self)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-05 01:00:34.325 1081 ERROR keystone self.__connect()", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-05 01:00:34.325 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-05 01:00:34.325 1081 ERROR keystone self(*args, **kw)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-05 01:00:34.325 1081 ERROR keystone fn(*args, **kw)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go", "2026-01-05 01:00:34.325 1081 ERROR keystone return once_fn(*arg, **kw)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect", "2026-01-05 01:00:34.325 1081 ERROR keystone dialect.initialize(c)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize", "2026-01-05 01:00:34.325 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize", "2026-01-05 01:00:34.325 1081 ERROR keystone self.default_isolation_level = 
self.get_default_isolation_level(", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level", "2026-01-05 01:00:34.325 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level", "2026-01-05 01:00:34.325 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-05 01:00:34.325 1081 ERROR keystone result = self._query(query)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-05 01:00:34.325 1081 ERROR keystone conn.query(q)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-05 01:00:34.325 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-05 01:00:34.325 1081 ERROR keystone result.read()", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-05 01:00:34.325 1081 ERROR 
keystone first_packet = self.connection._read_packet()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-05 01:00:34.325 1081 ERROR keystone packet.raise_for_error()", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-05 01:00:34.325 1081 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-05 01:00:34.325 1081 ERROR keystone raise errorclass(errno, errval)", "2026-01-05 01:00:34.325 1081 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-05 01:00:34.325 1081 ERROR keystone ", "2026-01-05 01:00:34.325 1081 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-05 01:00:34.325 1081 ERROR keystone ", "2026-01-05 01:00:34.325 1081 ERROR keystone Traceback (most recent call last):", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in <module>", "2026-01-05 01:00:34.325 1081 ERROR keystone sys.exit(main())", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", "2026-01-05 01:00:34.325 1081 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1727, in main", "2026-01-05 01:00:34.325 1081 ERROR keystone CONF.command.cmd_class.main()", "2026-01-05 01:00:34.325
1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 492, in main", "2026-01-05 01:00:34.325 1081 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 321, in offline_sync_database_to_version", "2026-01-05 01:00:34.325 1081 ERROR keystone _db_sync(engine=engine)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 210, in _db_sync", "2026-01-05 01:00:34.325 1081 ERROR keystone with sql.session_for_write() as session:", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-05 01:00:34.325 1081 ERROR keystone return next(self.gen)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1199, in _transaction_scope", "2026-01-05 01:00:34.325 1081 ERROR keystone with current._produce_block(", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-05 01:00:34.325 1081 ERROR keystone return next(self.gen)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 841, in _session", "2026-01-05 01:00:34.325 1081 ERROR keystone self.session = self.factory._create_session(", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 459, in _create_session", "2026-01-05 01:00:34.325 
1081 ERROR keystone self._start()", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 530, in _start", "2026-01-05 01:00:34.325 1081 ERROR keystone self._setup_for_connection(", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 647, in _setup_for_connection", "2026-01-05 01:00:34.325 1081 ERROR keystone engine = engines.create_engine(", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator", "2026-01-05 01:00:34.325 1081 ERROR keystone return wrapped(*args, **kwargs)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 271, in create_engine", "2026-01-05 01:00:34.325 1081 ERROR keystone _test_connection(engine_event_target, max_retries, retry_interval)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 169, in _test_connection", "2026-01-05 01:00:34.325 1081 ERROR keystone conn = engine.connect()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3274, in connect", "2026-01-05 01:00:34.325 1081 ERROR keystone return self._connection_cls(self)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__", "2026-01-05 01:00:34.325 1081 ERROR keystone 
Connection._handle_dbapi_exception_noconnection(", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2436, in _handle_dbapi_exception_noconnection", "2026-01-05 01:00:34.325 1081 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-05 01:00:34.325 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection", "2026-01-05 01:00:34.325 1081 ERROR keystone return self.pool.connect()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-05 01:00:34.325 1081 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-05 01:00:34.325 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-05 01:00:34.325 1081 ERROR keystone rec = pool._do_get()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-05 01:00:34.325 1081 ERROR keystone with util.safe_reraise():", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-05 01:00:34.325 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-05 01:00:34.325 1081 ERROR keystone return self._create_connection()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-05 01:00:34.325 1081 ERROR keystone return _ConnectionRecord(self)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-05 01:00:34.325 1081 ERROR keystone self.__connect()", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-05 01:00:34.325 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-05 01:00:34.325 1081 ERROR keystone self(*args, **kw)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", 
"2026-01-05 01:00:34.325 1081 ERROR keystone fn(*args, **kw)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go", "2026-01-05 01:00:34.325 1081 ERROR keystone return once_fn(*arg, **kw)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect", "2026-01-05 01:00:34.325 1081 ERROR keystone dialect.initialize(c)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize", "2026-01-05 01:00:34.325 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize", "2026-01-05 01:00:34.325 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level", "2026-01-05 01:00:34.325 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level", "2026-01-05 01:00:34.325 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-05 01:00:34.325 
1081 ERROR keystone result = self._query(query)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-05 01:00:34.325 1081 ERROR keystone conn.query(q)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-05 01:00:34.325 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-05 01:00:34.325 1081 ERROR keystone result.read()", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-05 01:00:34.325 1081 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-05 01:00:34.325 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-05 01:00:34.325 1081 ERROR keystone packet.raise_for_error()", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-05 01:00:34.325 1081 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-05 01:00:34.325 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-05 01:00:34.325 1081 ERROR keystone raise errorclass(errno, errval)", "2026-01-05 01:00:34.325 1081 ERROR keystone 
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-05 01:00:34.325 1081 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-05 01:00:34.325 1081 ERROR keystone "], "stdout": "Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n", "stdout_lines": ["Updating certificates in /etc/ssl/certs...", "1 added, 0 removed; done.", "Running hooks in /etc/ca-certificates/update.d...", "done."]} 2026-01-05 01:00:36.969081 | orchestrator | 2026-01-05 01:00:36.969088 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:00:36.969095 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-01-05 01:00:36.969107 | orchestrator | testbed-node-1 : ok=18  changed=10  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:00:36.969115 | orchestrator | testbed-node-2 : ok=18  changed=10  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:00:36.969122 | orchestrator | 2026-01-05 01:00:36.969129 | orchestrator | 2026-01-05 01:00:36.969136 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:00:36.969143 | orchestrator | Monday 05 January 2026 01:00:35 +0000 (0:00:11.018) 0:01:05.920 ******** 2026-01-05 01:00:36.969150 | orchestrator | =============================================================================== 2026-01-05 01:00:36.969156 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 11.02s 2026-01-05 01:00:36.969163 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.89s 2026-01-05 01:00:36.969170 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.29s 2026-01-05 
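The failure above bottoms out in MySQL error 1193: during dialect initialization SQLAlchemy reads the session isolation level with `SELECT @@transaction_isolation`, but the database server in this deployment only knows the legacy variable name `@@tx_isolation`. The `transaction_isolation` system variable was introduced in MySQL 5.7.20 (and `tx_isolation` removed in 8.0), while older MariaDB releases expose only `tx_isolation`, so a dialect/server version mismatch produces exactly this traceback. The helper below is a minimal illustrative sketch of that version split; the function name and the exact version thresholds are assumptions for illustration, not SQLAlchemy's actual implementation.

```python
# Hypothetical helper illustrating why "SELECT @@transaction_isolation"
# can raise MySQL error 1193 ("Unknown system variable"):
# the variable name depends on the server flavor and version.

def isolation_level_query(flavor: str, version: tuple) -> str:
    """Pick the query used to read the session isolation level.

    flavor:  'mysql' or 'mariadb'
    version: server version as a tuple, e.g. (10, 6, 14)

    The thresholds below are approximations for illustration:
    MySQL gained 'transaction_isolation' in 5.7.20 and dropped
    'tx_isolation' in 8.0; older MariaDB only knows 'tx_isolation'.
    """
    if flavor == "mysql" and version >= (5, 7, 20):
        return "SELECT @@transaction_isolation"
    if flavor == "mariadb" and version >= (11, 1, 1):
        return "SELECT @@transaction_isolation"
    # Legacy servers only provide the old variable name; sending the
    # new one instead is what triggers error 1193 in the log above.
    return "SELECT @@tx_isolation"
```

In other words, if the client library assumes a modern MySQL but is actually talking to an older MariaDB (or vice versa), the isolation-level probe fails before any real query runs, which is why Keystone's bootstrap container dies at connection setup.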
01:00:36.969177 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.70s 2026-01-05 01:00:36.969184 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.43s 2026-01-05 01:00:36.969191 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.82s 2026-01-05 01:00:36.969198 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.70s 2026-01-05 01:00:36.969205 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s 2026-01-05 01:00:36.969212 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.39s 2026-01-05 01:00:36.969219 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.01s 2026-01-05 01:00:36.969226 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.89s 2026-01-05 01:00:36.969232 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.68s 2026-01-05 01:00:36.969240 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.44s 2026-01-05 01:00:36.969247 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 1.20s 2026-01-05 01:00:36.969254 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 1.17s 2026-01-05 01:00:36.969265 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 1.07s 2026-01-05 01:00:36.969272 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.03s 2026-01-05 01:00:36.969279 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.98s 2026-01-05 01:00:36.969286 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.88s 2026-01-05 01:00:36.969293 
| orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.74s 2026-01-05 01:00:36.969300 | orchestrator | 2026-01-05 01:00:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:40.029188 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:00:40.031683 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:00:40.035309 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:00:40.036871 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:00:40.040544 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:00:40.040599 | orchestrator | 2026-01-05 01:00:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:43.090567 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:00:43.092755 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:00:43.093424 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:00:43.094446 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:00:43.096148 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:00:43.096191 | orchestrator | 2026-01-05 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:46.143392 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:00:46.146421 | orchestrator | 2026-01-05 01:00:46 
| INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:00:46.148911 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:00:46.150771 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:00:46.152744 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:00:46.152805 | orchestrator | 2026-01-05 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:49.200901 | orchestrator | 2026-01-05 01:00:49 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:00:49.203006 | orchestrator | 2026-01-05 01:00:49 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:00:49.205302 | orchestrator | 2026-01-05 01:00:49 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:00:49.206836 | orchestrator | 2026-01-05 01:00:49 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:00:49.208395 | orchestrator | 2026-01-05 01:00:49 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:00:49.208428 | orchestrator | 2026-01-05 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:52.247890 | orchestrator | 2026-01-05 01:00:52 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:00:52.250620 | orchestrator | 2026-01-05 01:00:52 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:00:52.251662 | orchestrator | 2026-01-05 01:00:52 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:00:52.253066 | orchestrator | 2026-01-05 01:00:52 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:00:52.254491 | orchestrator | 2026-01-05 01:00:52 | INFO  
| Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:00:52.254594 | orchestrator | 2026-01-05 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:55.306559 | orchestrator | 2026-01-05 01:00:55 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:00:55.308462 | orchestrator | 2026-01-05 01:00:55 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:00:55.310706 | orchestrator | 2026-01-05 01:00:55 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:00:55.313150 | orchestrator | 2026-01-05 01:00:55 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:00:55.315043 | orchestrator | 2026-01-05 01:00:55 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:00:55.315102 | orchestrator | 2026-01-05 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:58.369082 | orchestrator | 2026-01-05 01:00:58 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:00:58.370907 | orchestrator | 2026-01-05 01:00:58 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:00:58.373112 | orchestrator | 2026-01-05 01:00:58 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:00:58.375107 | orchestrator | 2026-01-05 01:00:58 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:00:58.377141 | orchestrator | 2026-01-05 01:00:58 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:00:58.377382 | orchestrator | 2026-01-05 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:01.424250 | orchestrator | 2026-01-05 01:01:01 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:01:01.426474 | orchestrator | 2026-01-05 01:01:01 | INFO  | Task 
df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:01:01.427656 | orchestrator | 2026-01-05 01:01:01 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:01:01.429060 | orchestrator | 2026-01-05 01:01:01 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:01:01.430579 | orchestrator | 2026-01-05 01:01:01 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:01:01.430616 | orchestrator | 2026-01-05 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:04.476966 | orchestrator | 2026-01-05 01:01:04 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:01:04.478971 | orchestrator | 2026-01-05 01:01:04 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:01:04.481536 | orchestrator | 2026-01-05 01:01:04 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:01:04.483157 | orchestrator | 2026-01-05 01:01:04 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:01:04.484606 | orchestrator | 2026-01-05 01:01:04 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:01:04.485668 | orchestrator | 2026-01-05 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:07.531250 | orchestrator | 2026-01-05 01:01:07 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:01:07.532270 | orchestrator | 2026-01-05 01:01:07 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:01:07.533639 | orchestrator | 2026-01-05 01:01:07 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:01:07.534867 | orchestrator | 2026-01-05 01:01:07 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:01:07.536087 | orchestrator | 2026-01-05 01:01:07 | INFO  | Task 
73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:01:07.536116 | orchestrator | 2026-01-05 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:10.586393 | orchestrator | 2026-01-05 01:01:10 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:01:10.588494 | orchestrator | 2026-01-05 01:01:10 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:01:10.591332 | orchestrator | 2026-01-05 01:01:10 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state STARTED 2026-01-05 01:01:10.593602 | orchestrator | 2026-01-05 01:01:10 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED 2026-01-05 01:01:10.595899 | orchestrator | 2026-01-05 01:01:10 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:01:10.595995 | orchestrator | 2026-01-05 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:13.629579 | orchestrator | 2026-01-05 01:01:13 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:01:13.631412 | orchestrator | 2026-01-05 01:01:13 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state STARTED 2026-01-05 01:01:13.635005 | orchestrator | 2026-01-05 01:01:13 | INFO  | Task c49c09fc-9141-4cb5-858d-1a7ec94caead is in state SUCCESS 2026-01-05 01:01:13.637574 | orchestrator | 2026-01-05 01:01:13.637645 | orchestrator | 2026-01-05 01:01:13.637657 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:01:13.637668 | orchestrator | 2026-01-05 01:01:13.637677 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:01:13.637687 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:00.281) 0:00:00.281 ******** 2026-01-05 01:01:13.637696 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:13.637706 | orchestrator | ok: [testbed-node-1] 
2026-01-05 01:01:13.637715 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:13.637724 | orchestrator | 2026-01-05 01:01:13.637733 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:01:13.637742 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:00.314) 0:00:00.596 ******** 2026-01-05 01:01:13.637751 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-05 01:01:13.637760 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-05 01:01:13.637769 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-05 01:01:13.637777 | orchestrator | 2026-01-05 01:01:13.637786 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-05 01:01:13.637795 | orchestrator | 2026-01-05 01:01:13.637804 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:01:13.637853 | orchestrator | Monday 05 January 2026 00:59:30 +0000 (0:00:00.438) 0:00:01.034 ******** 2026-01-05 01:01:13.637865 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:13.637896 | orchestrator | 2026-01-05 01:01:13.637905 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-05 01:01:13.637914 | orchestrator | Monday 05 January 2026 00:59:30 +0000 (0:00:00.533) 0:00:01.568 ******** 2026-01-05 01:01:13.637930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:01:13.637968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-01-05 01:01:13.637988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:01:13.637999 | orchestrator | 2026-01-05 01:01:13.638008 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-05 01:01:13.638065 | orchestrator | Monday 05 January 2026 00:59:31 +0000 (0:00:01.219) 0:00:02.787 ******** 2026-01-05 01:01:13.638075 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:13.638084 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:13.638093 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:13.638102 | orchestrator | 2026-01-05 01:01:13.638111 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:01:13.638129 | orchestrator | Monday 05 January 2026 00:59:32 +0000 (0:00:00.545) 0:00:03.332 ******** 2026-01-05 01:01:13.638138 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:01:13.638147 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:01:13.638156 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:01:13.638166 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:01:13.638177 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:01:13.638194 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:01:13.638204 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:01:13.638214 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': 
False})  2026-01-05 01:01:13.638224 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:01:13.638240 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:01:13.638251 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:01:13.638261 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:01:13.638272 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:01:13.638282 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:01:13.638292 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:01:13.638329 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 01:01:13.638340 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:01:13.638350 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:01:13.638360 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:01:13.638370 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:01:13.638381 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:01:13.638391 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:01:13.638402 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:01:13.638412 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 01:01:13.638424 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-01-05 01:01:13.638437 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-01-05 01:01:13.638448 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-01-05 01:01:13.638459 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-01-05 01:01:13.638469 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-01-05 01:01:13.638479 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-01-05 01:01:13.638490 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-01-05 01:01:13.638499 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-01-05 01:01:13.638509 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-01-05 01:01:13.638521 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-01-05 01:01:13.638538 | orchestrator |
2026-01-05 01:01:13.638548 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.638557 | orchestrator | Monday 05 January 2026 00:59:33 +0000 (0:00:00.779) 0:00:04.112 ********
2026-01-05 01:01:13.638565 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.638574 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.638583 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.638591 | orchestrator |
2026-01-05 01:01:13.638605 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.638614 | orchestrator | Monday 05 January 2026 00:59:33 +0000 (0:00:00.334) 0:00:04.446 ********
2026-01-05 01:01:13.638623 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.638632 | orchestrator |
2026-01-05 01:01:13.638641 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.638650 | orchestrator | Monday 05 January 2026 00:59:33 +0000 (0:00:00.132) 0:00:04.579 ********
2026-01-05 01:01:13.638659 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.638667 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.638676 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.638685 | orchestrator |
2026-01-05 01:01:13.638694 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.638702 | orchestrator | Monday 05 January 2026 00:59:34 +0000 (0:00:00.507) 0:00:05.086 ********
2026-01-05 01:01:13.638711 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.638720 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.638729 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.638737 | orchestrator |
2026-01-05 01:01:13.638746 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.638755 | orchestrator | Monday 05 January 2026 00:59:34 +0000 (0:00:00.321) 0:00:05.408 ********
2026-01-05 01:01:13.638764 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.638772 | orchestrator |
2026-01-05 01:01:13.638786 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.638795 | orchestrator | Monday 05 January 2026 00:59:34 +0000 (0:00:00.133) 0:00:05.542 ********
2026-01-05 01:01:13.638804 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.638813 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.638885 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.638895 | orchestrator |
2026-01-05 01:01:13.638904 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.638912 | orchestrator | Monday 05 January 2026 00:59:35 +0000 (0:00:00.296) 0:00:05.838 ********
2026-01-05 01:01:13.638921 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.638930 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.638939 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.638947 | orchestrator |
2026-01-05 01:01:13.638956 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.638965 | orchestrator | Monday 05 January 2026 00:59:35 +0000 (0:00:00.366) 0:00:06.205 ********
2026-01-05 01:01:13.638974 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.638982 | orchestrator |
2026-01-05 01:01:13.638991 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.639000 | orchestrator | Monday 05 January 2026 00:59:35 +0000 (0:00:00.353) 0:00:06.558 ********
2026-01-05 01:01:13.639008 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639022 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.639037 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.639050 | orchestrator |
2026-01-05 01:01:13.639065 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.639080 | orchestrator | Monday 05 January 2026 00:59:36 +0000 (0:00:00.367) 0:00:06.926 ********
2026-01-05 01:01:13.639094 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.639109 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.639133 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.639143 | orchestrator |
2026-01-05 01:01:13.639152 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.639161 | orchestrator | Monday 05 January 2026 00:59:36 +0000 (0:00:00.359) 0:00:07.286 ********
2026-01-05 01:01:13.639169 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639178 | orchestrator |
2026-01-05 01:01:13.639187 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.639196 | orchestrator | Monday 05 January 2026 00:59:36 +0000 (0:00:00.153) 0:00:07.439 ********
2026-01-05 01:01:13.639205 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639213 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.639222 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.639231 | orchestrator |
2026-01-05 01:01:13.639240 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.639249 | orchestrator | Monday 05 January 2026 00:59:36 +0000 (0:00:00.306) 0:00:07.745 ********
2026-01-05 01:01:13.639257 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.639266 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.639275 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.639284 | orchestrator |
2026-01-05 01:01:13.639292 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.639301 | orchestrator | Monday 05 January 2026 00:59:37 +0000 (0:00:00.557) 0:00:08.302 ********
2026-01-05 01:01:13.639310 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639319 | orchestrator |
2026-01-05 01:01:13.639328 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.639337 | orchestrator | Monday 05 January 2026 00:59:37 +0000 (0:00:00.145) 0:00:08.448 ********
2026-01-05 01:01:13.639346 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639355 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.639364 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.639372 | orchestrator |
2026-01-05 01:01:13.639394 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.639403 | orchestrator | Monday 05 January 2026 00:59:37 +0000 (0:00:00.317) 0:00:08.765 ********
2026-01-05 01:01:13.639411 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.639419 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.639435 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.639443 | orchestrator |
2026-01-05 01:01:13.639451 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.639459 | orchestrator | Monday 05 January 2026 00:59:38 +0000 (0:00:00.375) 0:00:09.140 ********
2026-01-05 01:01:13.639467 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639475 | orchestrator |
2026-01-05 01:01:13.639483 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.639491 | orchestrator | Monday 05 January 2026 00:59:38 +0000 (0:00:00.138) 0:00:09.279 ********
2026-01-05 01:01:13.639499 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639507 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.639523 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.639531 | orchestrator |
2026-01-05 01:01:13.639539 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.639547 | orchestrator | Monday 05 January 2026 00:59:38 +0000 (0:00:00.335) 0:00:09.614 ********
2026-01-05 01:01:13.639559 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.639576 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.639594 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.639608 | orchestrator |
2026-01-05 01:01:13.639621 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.639635 | orchestrator | Monday 05 January 2026 00:59:39 +0000 (0:00:00.609) 0:00:10.223 ********
2026-01-05 01:01:13.639649 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639663 | orchestrator |
2026-01-05 01:01:13.639676 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.639699 | orchestrator | Monday 05 January 2026 00:59:39 +0000 (0:00:00.261) 0:00:10.484 ********
2026-01-05 01:01:13.639713 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639728 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.639743 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.639758 | orchestrator |
2026-01-05 01:01:13.639774 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.639784 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:00.355) 0:00:10.840 ********
2026-01-05 01:01:13.639798 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.639806 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.639814 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.639844 | orchestrator |
2026-01-05 01:01:13.639852 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.639860 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:00.391) 0:00:11.231 ********
2026-01-05 01:01:13.639868 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639876 | orchestrator |
2026-01-05 01:01:13.639884 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.639892 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:00.194) 0:00:11.426 ********
2026-01-05 01:01:13.639900 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.639908 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.639916 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.639924 | orchestrator |
2026-01-05 01:01:13.639932 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.639940 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:00.329) 0:00:11.756 ********
2026-01-05 01:01:13.639947 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.639955 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.639963 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.639971 | orchestrator |
2026-01-05 01:01:13.639979 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.639986 | orchestrator | Monday 05 January 2026 00:59:41 +0000 (0:00:00.567) 0:00:12.323 ********
2026-01-05 01:01:13.639994 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.640007 | orchestrator |
2026-01-05 01:01:13.640021 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.640040 | orchestrator | Monday 05 January 2026 00:59:41 +0000 (0:00:00.151) 0:00:12.474 ********
2026-01-05 01:01:13.640053 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.640066 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.640079 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.640092 | orchestrator |
2026-01-05 01:01:13.640105 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-05 01:01:13.640118 | orchestrator | Monday 05 January 2026 00:59:41 +0000 (0:00:00.308) 0:00:12.783 ********
2026-01-05 01:01:13.640130 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:13.640144 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:13.640157 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:13.640171 | orchestrator |
2026-01-05 01:01:13.640186 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-05 01:01:13.640199 | orchestrator | Monday 05 January 2026 00:59:42 +0000 (0:00:00.329) 0:00:13.112 ********
2026-01-05 01:01:13.640213 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.640227 | orchestrator |
2026-01-05 01:01:13.640236 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-05 01:01:13.640244 | orchestrator | Monday 05 January 2026 00:59:42 +0000 (0:00:00.147) 0:00:13.260 ********
2026-01-05 01:01:13.640252 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.640260 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.640268 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.640276 | orchestrator |
2026-01-05 01:01:13.640284 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-05 01:01:13.640305 | orchestrator | Monday 05 January 2026 00:59:42 +0000 (0:00:00.529) 0:00:13.789 ********
2026-01-05 01:01:13.640313 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:13.640321 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:13.640328 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:13.640336 | orchestrator |
2026-01-05 01:01:13.640344 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-05 01:01:13.640352
| orchestrator | Monday 05 January 2026 00:59:44 +0000 (0:00:01.783) 0:00:15.572 ********
2026-01-05 01:01:13.640361 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-05 01:01:13.640369 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-05 01:01:13.640377 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-05 01:01:13.640385 | orchestrator |
2026-01-05 01:01:13.640393 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-05 01:01:13.640401 | orchestrator | Monday 05 January 2026 00:59:46 +0000 (0:00:02.205) 0:00:17.778 ********
2026-01-05 01:01:13.640409 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-05 01:01:13.640418 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-05 01:01:13.640437 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-05 01:01:13.640451 | orchestrator |
2026-01-05 01:01:13.640463 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-05 01:01:13.640476 | orchestrator | Monday 05 January 2026 00:59:50 +0000 (0:00:03.058) 0:00:20.837 ********
2026-01-05 01:01:13.640491 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-05 01:01:13.640504 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-05 01:01:13.640519 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-05 01:01:13.640527 | orchestrator |
2026-01-05 01:01:13.640535 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-05 01:01:13.640543 | orchestrator | Monday 05 January 2026 00:59:52 +0000 (0:00:02.306) 0:00:23.143 ********
2026-01-05 01:01:13.640551 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.640560 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.640568 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.640576 | orchestrator |
2026-01-05 01:01:13.640584 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-05 01:01:13.640592 | orchestrator | Monday 05 January 2026 00:59:52 +0000 (0:00:00.324) 0:00:23.468 ********
2026-01-05 01:01:13.640600 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.640609 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.640617 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.640625 | orchestrator |
2026-01-05 01:01:13.640634 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-05 01:01:13.640642 | orchestrator | Monday 05 January 2026 00:59:52 +0000 (0:00:00.299) 0:00:23.767 ********
2026-01-05 01:01:13.640650 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:13.640658 | orchestrator |
2026-01-05 01:01:13.640666 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-01-05 01:01:13.640675 | orchestrator | Monday 05 January 2026 00:59:53 +0000 (0:00:00.953) 0:00:24.721 ********
2026-01-05 01:01:13.640730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC':
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:01:13.640786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:01:13.640849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:01:13.640870 | orchestrator | 2026-01-05 01:01:13.640883 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-05 01:01:13.640896 | orchestrator | Monday 05 January 2026 00:59:55 +0000 (0:00:01.948) 0:00:26.670 ******** 2026-01-05 01:01:13.640918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:01:13.640941 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:13.640968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:01:13.640983 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:13.641001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:01:13.641016 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:13.641024 | orchestrator | 2026-01-05 01:01:13.641032 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-05 01:01:13.641040 | orchestrator | Monday 05 January 2026 00:59:56 +0000 (0:00:00.728) 0:00:27.399 ******** 2026-01-05 01:01:13.641060 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:01:13.641070 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:13.641079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:01:13.641094 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:13.641116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-05 01:01:13.641132 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.641140 | orchestrator |
2026-01-05 01:01:13.641152 | orchestrator | TASK [service-check-containers : horizon | Check containers] *******************
2026-01-05 01:01:13.641165 | orchestrator | Monday 05 January 2026 00:59:57 +0000 (0:00:00.922) 0:00:28.321 ********
2026-01-05 01:01:13.641182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:01:13.641222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:01:13.641605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 
01:01:13.641621 | orchestrator |
2026-01-05 01:01:13.641634 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] ***
2026-01-05 01:01:13.641646 | orchestrator | Monday 05 January 2026 00:59:59 +0000 (0:00:02.140) 0:00:30.461 ********
2026-01-05 01:01:13.641659 | orchestrator | changed: [testbed-node-0] => {
2026-01-05 01:01:13.641672 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:01:13.641685 | orchestrator | }
2026-01-05 01:01:13.641699 | orchestrator | changed: [testbed-node-1] => {
2026-01-05 01:01:13.641713 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:01:13.641727 | orchestrator | }
2026-01-05 01:01:13.641741 | orchestrator | changed: [testbed-node-2] => {
2026-01-05 01:01:13.641754 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:01:13.641768 | orchestrator | }
2026-01-05 01:01:13.641783 | orchestrator |
2026-01-05 01:01:13.641803 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-05 01:01:13.641865 | orchestrator | Monday 05 January 2026 00:59:59 +0000 (0:00:00.343) 0:00:30.804 ********
2026-01-05 01:01:13.641892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:01:13.641909 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:13.641943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:01:13.641976 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:13.641991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-05 01:01:13.642005 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.642110 | orchestrator |
2026-01-05 01:01:13.642127 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-05 01:01:13.642143 | orchestrator | Monday 05 January 2026 01:00:01 +0000 (0:00:01.277) 0:00:32.082 ********
2026-01-05 01:01:13.642157 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:13.642172 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:13.642188 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:13.642202 | orchestrator |
2026-01-05 01:01:13.642215 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-05 01:01:13.642230 | orchestrator | Monday 05 January 2026 01:00:01 +0000 (0:00:00.512) 0:00:32.595 ********
2026-01-05 01:01:13.642244 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:13.642258 | orchestrator |
2026-01-05 01:01:13.642284 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-05 01:01:13.642301 | orchestrator | Monday 05 January 2026 01:00:02 +0000 (0:00:00.553) 0:00:33.148 ********
2026-01-05 01:01:13.642328 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:13.642343 | orchestrator |
2026-01-05 01:01:13.642358 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-05 01:01:13.642372 | orchestrator | Monday 05 January 2026 01:00:05 +0000 (0:00:02.970) 0:00:36.119 ********
2026-01-05 01:01:13.642386 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:13.642400 | orchestrator |
2026-01-05 01:01:13.642413 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-05 01:01:13.642422 | orchestrator | Monday 05 January 2026 01:00:07 +0000 (0:00:02.639) 0:00:38.759 ********
2026-01-05 01:01:13.642430 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:13.642439 | orchestrator |
2026-01-05 01:01:13.642446 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-05 01:01:13.642455 | orchestrator | Monday 05 January 2026 01:00:24 +0000 (0:00:16.800) 0:00:55.560 ********
2026-01-05 01:01:13.642462 | orchestrator |
2026-01-05 01:01:13.642470 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-05 01:01:13.642478 | orchestrator | Monday 05 January 2026 01:00:24 +0000 (0:00:00.065) 0:00:55.625 ********
2026-01-05 01:01:13.642486 | orchestrator |
2026-01-05 01:01:13.642500 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-05 01:01:13.642509 | orchestrator | Monday 05 January 2026 01:00:25 +0000 (0:00:00.255) 0:00:55.881 ********
2026-01-05 01:01:13.642516 | orchestrator |
2026-01-05 01:01:13.642524 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-05 01:01:13.642532 | orchestrator | Monday 05 January 2026 01:00:25 +0000 (0:00:00.067) 0:00:55.949 ********
2026-01-05 01:01:13.642540 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:13.642548 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:13.642556 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:13.642563 | orchestrator |
2026-01-05 01:01:13.642571 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:01:13.642581 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0
2026-01-05 01:01:13.642590 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-01-05 01:01:13.642598 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-01-05 01:01:13.642605 | orchestrator |
2026-01-05 01:01:13.642614 | orchestrator |
2026-01-05 01:01:13.642621 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:01:13.642629 | orchestrator | Monday 05 January 2026 01:01:11 +0000 (0:00:46.529) 0:01:42.479 ********
2026-01-05 01:01:13.642637 | orchestrator | ===============================================================================
2026-01-05 01:01:13.642645 | orchestrator | horizon : Restart horizon container ------------------------------------ 46.53s
2026-01-05 01:01:13.642653 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.80s
2026-01-05 01:01:13.642661 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.06s
2026-01-05 01:01:13.642669 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.97s
2026-01-05 01:01:13.642677 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.64s
2026-01-05 01:01:13.642685 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.31s
2026-01-05 01:01:13.642693 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.21s
2026-01-05 01:01:13.642701 | orchestrator | service-check-containers : horizon | Check containers ------------------- 2.14s
2026-01-05 01:01:13.642709 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.95s
2026-01-05 01:01:13.642723 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.78s
2026-01-05 01:01:13.642740 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.28s
2026-01-05 01:01:13.642770 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.22s
2026-01-05 01:01:13.642784 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.95s
2026-01-05 01:01:13.642796 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.92s
2026-01-05 01:01:13.642808 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s
2026-01-05 01:01:13.642852 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.73s
2026-01-05 01:01:13.642864 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s
2026-01-05 01:01:13.642875 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s
2026-01-05 01:01:13.642887 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s
2026-01-05 01:01:13.642899 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s
2026-01-05 01:01:13.642911 | orchestrator | 2026-01-05 01:01:13 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:13.642924 | orchestrator | 2026-01-05 01:01:13 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:13.642935 | orchestrator | 2026-01-05 01:01:13 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:13.642961 | orchestrator | 2026-01-05 01:01:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:16.697441 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:16.700973 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task df8e3fff-3349-43fc-b7df-b37eec1f4fc6 is in state SUCCESS
2026-01-05 01:01:16.702764 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:16.705068 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:16.706632 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:16.708415 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:16.708459 | orchestrator | 2026-01-05 01:01:16 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:19.755773 | orchestrator | 2026-01-05 01:01:19 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:19.758604 | orchestrator | 2026-01-05 01:01:19 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:19.760557 | orchestrator | 2026-01-05 01:01:19 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:19.761886 | orchestrator | 2026-01-05 01:01:19 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:19.763646 | orchestrator | 2026-01-05 01:01:19 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:19.763695 | orchestrator | 2026-01-05 01:01:19 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:22.800982 | orchestrator | 2026-01-05 01:01:22 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:22.802942 | orchestrator | 2026-01-05 01:01:22 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:22.805098 | orchestrator | 2026-01-05 01:01:22 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:22.806797 | orchestrator | 2026-01-05 01:01:22 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:22.808716 | orchestrator | 2026-01-05 01:01:22 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:22.808766 | orchestrator | 2026-01-05 01:01:22 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:25.849928 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:25.852697 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:25.855085 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:25.857249 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:25.859208 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:25.859256 | orchestrator | 2026-01-05 01:01:25 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:28.902851 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:28.904352 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:28.906113 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:28.907896 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:28.909347 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:28.909381 | orchestrator | 2026-01-05 01:01:28 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:31.958606 | orchestrator | 2026-01-05 01:01:31 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:31.960314 | orchestrator | 2026-01-05 01:01:31 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:31.961620 | orchestrator | 2026-01-05 01:01:31 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:31.964382 | orchestrator | 2026-01-05 01:01:31 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:31.967243 | orchestrator | 2026-01-05 01:01:31 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:31.967296 | orchestrator | 2026-01-05 01:01:31 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:35.015082 | orchestrator | 2026-01-05 01:01:35 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:35.016712 | orchestrator | 2026-01-05 01:01:35 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:35.020486 | orchestrator | 2026-01-05 01:01:35 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:35.022166 | orchestrator | 2026-01-05 01:01:35 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:35.024238 | orchestrator | 2026-01-05 01:01:35 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:35.024286 | orchestrator | 2026-01-05 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:38.067835 | orchestrator | 2026-01-05 01:01:38 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:38.070934 | orchestrator | 2026-01-05 01:01:38 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:38.073078 | orchestrator | 2026-01-05 01:01:38 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:38.074473 | orchestrator | 2026-01-05 01:01:38 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:38.077718 | orchestrator | 2026-01-05 01:01:38 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:38.077830 | orchestrator | 2026-01-05 01:01:38 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:41.122672 | orchestrator | 2026-01-05 01:01:41 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:41.125583 | orchestrator | 2026-01-05 01:01:41 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:41.127695 | orchestrator | 2026-01-05 01:01:41 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:41.129682 | orchestrator | 2026-01-05 01:01:41 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:41.131037 | orchestrator | 2026-01-05 01:01:41 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:41.131096 | orchestrator | 2026-01-05 01:01:41 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:44.183379 | orchestrator | 2026-01-05 01:01:44 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:44.184996 | orchestrator | 2026-01-05 01:01:44 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:44.186721 | orchestrator | 2026-01-05 01:01:44 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:44.188144 | orchestrator | 2026-01-05 01:01:44 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:44.189820 | orchestrator | 2026-01-05 01:01:44 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:44.189868 | orchestrator | 2026-01-05 01:01:44 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:47.242989 | orchestrator | 2026-01-05 01:01:47 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:47.245848 | orchestrator | 2026-01-05 01:01:47 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state STARTED
2026-01-05 01:01:47.248158 | orchestrator | 2026-01-05 01:01:47 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:47.250105 | orchestrator | 2026-01-05 01:01:47 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED
2026-01-05 01:01:47.252067 | orchestrator | 2026-01-05 01:01:47 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:47.252106 | orchestrator | 2026-01-05 01:01:47 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:50.293313 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED
2026-01-05 01:01:50.293758 | orchestrator |
2026-01-05 01:01:50.293811 | orchestrator |
2026-01-05 01:01:50.293817 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-05 01:01:50.293821 | orchestrator |
2026-01-05 01:01:50.293826 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-05 01:01:50.293830 | orchestrator | Monday 05 January 2026 01:00:22 +0000 (0:00:00.237) 0:00:00.237 ********
2026-01-05 01:01:50.293834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-05 01:01:50.293841 | orchestrator |
2026-01-05 01:01:50.293845 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-05 01:01:50.293849 | orchestrator | Monday 05 January 2026 01:00:23 +0000 (0:00:00.235) 0:00:00.473 ********
2026-01-05 01:01:50.293880 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-05 01:01:50.293885 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-05 01:01:50.293889 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-05 01:01:50.293893 | orchestrator |
2026-01-05 01:01:50.293897 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-05 01:01:50.293901 | orchestrator | Monday 05 January 2026 01:00:24 +0000 (0:00:01.351) 0:00:01.825 ********
2026-01-05 01:01:50.293906 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-05 01:01:50.293910 | orchestrator |
2026-01-05 01:01:50.293913 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-05 01:01:50.293928 | orchestrator | Monday 05 January 2026 01:00:26 +0000 (0:00:01.502) 0:00:03.328 ********
2026-01-05 01:01:50.293932 | orchestrator | changed: [testbed-manager]
2026-01-05 01:01:50.293936 | orchestrator |
2026-01-05 01:01:50.293940 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-05 01:01:50.293943 | orchestrator | Monday 05 January 2026 01:00:26 +0000 (0:00:00.831) 0:00:04.159 ********
2026-01-05 01:01:50.293947 | orchestrator | changed: [testbed-manager]
2026-01-05 01:01:50.293951 | orchestrator |
2026-01-05 01:01:50.293955 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-05 01:01:50.293958 | orchestrator | Monday 05 January 2026 01:00:27 +0000 (0:00:00.913) 0:00:05.073 ********
2026-01-05 01:01:50.293962 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-05 01:01:50.293966 | orchestrator | ok: [testbed-manager] 2026-01-05 01:01:50.293970 | orchestrator | 2026-01-05 01:01:50.293974 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-05 01:01:50.293978 | orchestrator | Monday 05 January 2026 01:01:05 +0000 (0:00:37.240) 0:00:42.313 ******** 2026-01-05 01:01:50.293982 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-05 01:01:50.293986 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-05 01:01:50.293990 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-05 01:01:50.293993 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-05 01:01:50.293997 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-05 01:01:50.294001 | orchestrator | 2026-01-05 01:01:50.294004 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-05 01:01:50.294008 | orchestrator | Monday 05 January 2026 01:01:09 +0000 (0:00:04.207) 0:00:46.520 ******** 2026-01-05 01:01:50.294044 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-05 01:01:50.294048 | orchestrator | 2026-01-05 01:01:50.294052 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-05 01:01:50.294056 | orchestrator | Monday 05 January 2026 01:01:09 +0000 (0:00:00.451) 0:00:46.972 ******** 2026-01-05 01:01:50.294060 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:01:50.294063 | orchestrator | 2026-01-05 01:01:50.294067 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-05 01:01:50.294071 | orchestrator | Monday 05 January 2026 01:01:09 +0000 (0:00:00.130) 0:00:47.103 ******** 2026-01-05 01:01:50.294075 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:01:50.294078 | orchestrator | 2026-01-05 01:01:50.294082 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-01-05 01:01:50.294086 | orchestrator | Monday 05 January 2026 01:01:10 +0000 (0:00:00.581) 0:00:47.684 ******** 2026-01-05 01:01:50.294090 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:50.294093 | orchestrator | 2026-01-05 01:01:50.294097 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-05 01:01:50.294101 | orchestrator | Monday 05 January 2026 01:01:11 +0000 (0:00:01.466) 0:00:49.151 ******** 2026-01-05 01:01:50.294105 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:50.294114 | orchestrator | 2026-01-05 01:01:50.294118 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-05 01:01:50.294122 | orchestrator | Monday 05 January 2026 01:01:12 +0000 (0:00:00.718) 0:00:49.870 ******** 2026-01-05 01:01:50.294125 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:50.294129 | orchestrator | 2026-01-05 01:01:50.294133 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-05 01:01:50.294136 | orchestrator | Monday 05 January 2026 01:01:13 +0000 (0:00:00.534) 0:00:50.404 ******** 2026-01-05 01:01:50.294140 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-05 01:01:50.294144 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-01-05 01:01:50.294148 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-05 01:01:50.294152 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-05 01:01:50.294155 | orchestrator | 2026-01-05 01:01:50.294159 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:01:50.294163 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:01:50.294168 | orchestrator | 2026-01-05 01:01:50.294171 | orchestrator | 2026-01-05 
01:01:50.294183 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:50.294187 | orchestrator | Monday 05 January 2026 01:01:14 +0000 (0:00:01.384) 0:00:51.789 ******** 2026-01-05 01:01:50.294190 | orchestrator | =============================================================================== 2026-01-05 01:01:50.294194 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.24s 2026-01-05 01:01:50.294200 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.21s 2026-01-05 01:01:50.294206 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.50s 2026-01-05 01:01:50.294212 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.47s 2026-01-05 01:01:50.294217 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.38s 2026-01-05 01:01:50.294232 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.35s 2026-01-05 01:01:50.294238 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s 2026-01-05 01:01:50.294244 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.83s 2026-01-05 01:01:50.294250 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s 2026-01-05 01:01:50.294255 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.58s 2026-01-05 01:01:50.294261 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s 2026-01-05 01:01:50.294266 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2026-01-05 01:01:50.294277 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-01-05 01:01:50.294287 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-01-05 01:01:50.294292 | orchestrator | 2026-01-05 01:01:50.294297 | orchestrator | 2026-01-05 01:01:50.294303 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:01:50.294309 | orchestrator | 2026-01-05 01:01:50.294315 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:01:50.294322 | orchestrator | Monday 05 January 2026 01:00:40 +0000 (0:00:00.302) 0:00:00.302 ******** 2026-01-05 01:01:50.294328 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:50.294334 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:50.294340 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:50.294346 | orchestrator | 2026-01-05 01:01:50.294351 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:01:50.294357 | orchestrator | Monday 05 January 2026 01:00:41 +0000 (0:00:00.391) 0:00:00.693 ******** 2026-01-05 01:01:50.294363 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-05 01:01:50.294376 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-05 01:01:50.294390 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-05 01:01:50.294395 | orchestrator | 2026-01-05 01:01:50.294400 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-05 01:01:50.294405 | orchestrator | 2026-01-05 01:01:50.294409 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-05 01:01:50.294414 | orchestrator | Monday 05 January 2026 01:00:41 +0000 (0:00:00.541) 0:00:01.235 ******** 2026-01-05 01:01:50.294419 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:50.294424 | 
orchestrator | 2026-01-05 01:01:50.294428 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-01-05 01:01:50.294433 | orchestrator | Monday 05 January 2026 01:00:42 +0000 (0:00:00.734) 0:00:01.969 ******** 2026-01-05 01:01:50.294437 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left). 2026-01-05 01:01:50.294442 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left). 2026-01-05 01:01:50.294447 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left). 2026-01-05 01:01:50.294451 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left). 2026-01-05 01:01:50.294456 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left). 2026-01-05 01:01:50.294487 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767574907.8885496-3363-85116251257609/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767574907.8885496-3363-85116251257609/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767574907.8885496-3363-85116251257609/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_qplve23n/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_qplve23n/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_qplve23n/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_qplve23n/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_qplve23n/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-05 01:01:50.294498 | orchestrator | 2026-01-05 01:01:50.294503 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:01:50.294512 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-05 01:01:50.294516 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:50.294522 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:50.294526 | orchestrator | 2026-01-05 01:01:50.294531 | orchestrator | 2026-01-05 01:01:50.294535 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:50.294540 | orchestrator | Monday 05 January 2026 01:01:49 +0000 (0:01:06.853) 0:01:08.823 ******** 2026-01-05 01:01:50.294544 | orchestrator | =============================================================================== 2026-01-05 01:01:50.294551 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 66.85s 2026-01-05 01:01:50.294557 | orchestrator | designate : include_tasks ----------------------------------------------- 0.73s 2026-01-05 01:01:50.294564 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-01-05 01:01:50.294570 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2026-01-05 01:01:50.294584 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task bf6c850a-727b-4f15-8b50-834627ad8612 is in state SUCCESS 2026-01-05 01:01:50.294591 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:01:50.294915 | orchestrator | 2026-01-05 
01:01:50 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state STARTED 2026-01-05 01:01:50.296017 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED 2026-01-05 01:01:50.296068 | orchestrator | 2026-01-05 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:53.342160 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state STARTED 2026-01-05 01:01:53.342938 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED 2026-01-05 01:01:53.345696 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:01:53.347787 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task 73f9c97f-8aa4-4ae3-99ef-9b80f42b0444 is in state SUCCESS 2026-01-05 01:01:53.348359 | orchestrator | 2026-01-05 01:01:53.348404 | orchestrator | 2026-01-05 01:01:53.348410 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:01:53.348416 | orchestrator | 2026-01-05 01:01:53.348420 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:01:53.348425 | orchestrator | Monday 05 January 2026 01:00:41 +0000 (0:00:00.490) 0:00:00.490 ******** 2026-01-05 01:01:53.348429 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:53.348434 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:53.348438 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:53.348443 | orchestrator | 2026-01-05 01:01:53.348447 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:01:53.348451 | orchestrator | Monday 05 January 2026 01:00:41 +0000 (0:00:00.499) 0:00:00.989 ******** 2026-01-05 01:01:53.348455 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-05 01:01:53.348460 | orchestrator | ok: 
[testbed-node-1] => (item=enable_barbican_True) 2026-01-05 01:01:53.348464 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-05 01:01:53.348468 | orchestrator | 2026-01-05 01:01:53.348472 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-05 01:01:53.348476 | orchestrator | 2026-01-05 01:01:53.348479 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-05 01:01:53.348504 | orchestrator | Monday 05 January 2026 01:00:42 +0000 (0:00:00.631) 0:00:01.621 ******** 2026-01-05 01:01:53.348508 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:53.348514 | orchestrator | 2026-01-05 01:01:53.348518 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-01-05 01:01:53.348522 | orchestrator | Monday 05 January 2026 01:00:42 +0000 (0:00:00.718) 0:00:02.339 ******** 2026-01-05 01:01:53.348528 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left). 2026-01-05 01:01:53.348535 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left). 2026-01-05 01:01:53.348541 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left). 2026-01-05 01:01:53.348547 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left). 2026-01-05 01:01:53.348554 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left). 
2026-01-05 01:01:53.348604 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, 
url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767574908.668046-3390-50776546347720/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767574908.668046-3390-50776546347720/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767574908.668046-3390-50776546347720/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_c_g3dwqy/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_c_g3dwqy/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_c_g3dwqy/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_c_g3dwqy/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_c_g3dwqy/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in 
_do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-05 01:01:53.348617 | orchestrator | 2026-01-05 01:01:53.348622 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:01:53.348626 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.348632 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.348640 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.348652 | orchestrator | 2026-01-05 01:01:53.348658 | orchestrator | 2026-01-05 01:01:53.348664 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:53.348669 | orchestrator | Monday 05 January 2026 01:01:50 +0000 (0:01:07.156) 0:01:09.496 ******** 2026-01-05 01:01:53.348676 | orchestrator | =============================================================================== 2026-01-05 01:01:53.348683 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 67.16s 2026-01-05 01:01:53.348689 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.72s 2026-01-05 01:01:53.348695 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-01-05 01:01:53.348702 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2026-01-05 01:01:53.353159 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task 
4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED 2026-01-05 01:01:53.356476 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED 2026-01-05 01:01:53.357130 | orchestrator | 2026-01-05 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:56.395403 | orchestrator | 2026-01-05 01:01:56.395507 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task f6b6cfa1-9796-4ab6-85a3-a64b4d39e5d7 is in state SUCCESS 2026-01-05 01:01:56.396150 | orchestrator | 2026-01-05 01:01:56.396184 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:01:56.396195 | orchestrator | 2026-01-05 01:01:56.396205 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:01:56.396215 | orchestrator | Monday 05 January 2026 01:00:41 +0000 (0:00:00.383) 0:00:00.383 ******** 2026-01-05 01:01:56.396224 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:56.396253 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:56.396262 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:56.396273 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:56.396287 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:56.396302 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:56.396316 | orchestrator | 2026-01-05 01:01:56.396331 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:01:56.396346 | orchestrator | Monday 05 January 2026 01:00:42 +0000 (0:00:00.966) 0:00:01.349 ******** 2026-01-05 01:01:56.396361 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-05 01:01:56.396376 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-05 01:01:56.396391 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-05 01:01:56.396500 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 
2026-01-05 01:01:56.396516 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-01-05 01:01:56.396531 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-01-05 01:01:56.396546 | orchestrator |
2026-01-05 01:01:56.396561 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-01-05 01:01:56.396577 | orchestrator |
2026-01-05 01:01:56.396590 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-05 01:01:56.396604 | orchestrator | Monday 05 January 2026 01:00:43 +0000 (0:00:00.833) 0:00:02.182 ********
2026-01-05 01:01:56.396615 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:01:56.396626 | orchestrator |
2026-01-05 01:01:56.396635 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-01-05 01:01:56.396644 | orchestrator | Monday 05 January 2026 01:00:44 +0000 (0:00:01.226) 0:00:03.409 ********
2026-01-05 01:01:56.396653 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:56.396662 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:56.396670 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:56.396703 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:56.396712 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:56.396721 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:56.396729 | orchestrator |
2026-01-05 01:01:56.396737 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-01-05 01:01:56.396746 | orchestrator | Monday 05 January 2026 01:00:45 +0000 (0:00:01.664) 0:00:05.074 ********
2026-01-05 01:01:56.396782 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:56.396801 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:56.396822 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:56.396839 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:56.396852 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:56.396865 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:56.396878 | orchestrator |
2026-01-05 01:01:56.396893 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-01-05 01:01:56.396907 | orchestrator | Monday 05 January 2026 01:00:47 +0000 (0:00:01.190) 0:00:06.265 ********
2026-01-05 01:01:56.396922 | orchestrator | ok: [testbed-node-0] => {
2026-01-05 01:01:56.396940 | orchestrator |  "changed": false,
2026-01-05 01:01:56.396958 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:01:56.396973 | orchestrator | }
2026-01-05 01:01:56.396989 | orchestrator | ok: [testbed-node-1] => {
2026-01-05 01:01:56.396999 | orchestrator |  "changed": false,
2026-01-05 01:01:56.397011 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:01:56.397021 | orchestrator | }
2026-01-05 01:01:56.397032 | orchestrator | ok: [testbed-node-2] => {
2026-01-05 01:01:56.397042 | orchestrator |  "changed": false,
2026-01-05 01:01:56.397053 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:01:56.397063 | orchestrator | }
2026-01-05 01:01:56.397072 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 01:01:56.397080 | orchestrator |  "changed": false,
2026-01-05 01:01:56.397089 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:01:56.397098 | orchestrator | }
2026-01-05 01:01:56.397106 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 01:01:56.397115 | orchestrator |  "changed": false,
2026-01-05 01:01:56.397125 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:01:56.397139 | orchestrator | }
2026-01-05 01:01:56.397159 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:01:56.397178 | orchestrator |  "changed": false,
2026-01-05 01:01:56.397191 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:01:56.397204 | orchestrator | }
2026-01-05 01:01:56.397217 | orchestrator |
2026-01-05 01:01:56.397230 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-01-05 01:01:56.397243 | orchestrator | Monday 05 January 2026 01:00:47 +0000 (0:00:00.809) 0:00:07.074 ********
2026-01-05 01:01:56.397256 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:56.397270 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:56.397285 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:56.397300 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:56.397314 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:56.397328 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:56.397349 | orchestrator |
2026-01-05 01:01:56.397366 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-01-05 01:01:56.397380 | orchestrator | Monday 05 January 2026 01:00:48 +0000 (0:00:00.613) 0:00:07.687 ********
2026-01-05 01:01:56.397394 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left).
2026-01-05 01:01:56.397410 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left).
2026-01-05 01:01:56.397425 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left).
2026-01-05 01:01:56.397440 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left).
2026-01-05 01:01:56.397456 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left).
2026-01-05 01:01:56.397546 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767574912.8143082-3428-166731361606884/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767574912.8143082-3428-166731361606884/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767574912.8143082-3428-166731361606884/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload__tw6xssi/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload__tw6xssi/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload__tw6xssi/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload__tw6xssi/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload__tw6xssi/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-05 01:01:56.397568 | orchestrator |
2026-01-05 01:01:56.397578 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:01:56.397587 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-01-05 01:01:56.397597 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 01:01:56.397605 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 01:01:56.397614 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 01:01:56.397623 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 01:01:56.397632 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 01:01:56.397640 | orchestrator |
2026-01-05 01:01:56.397649 | orchestrator |
2026-01-05 01:01:56.397658 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:01:56.397667 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:01:05.684) 0:01:13.372 ********
2026-01-05 01:01:56.397676 | orchestrator | ===============================================================================
2026-01-05 01:01:56.397684 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 65.68s
2026-01-05 01:01:56.397699 | orchestrator | neutron : Get container facts ------------------------------------------- 1.66s
2026-01-05 01:01:56.397708 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.23s
2026-01-05 01:01:56.397716 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.19s
2026-01-05 01:01:56.397725 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.97s
2026-01-05 01:01:56.397734 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s
2026-01-05 01:01:56.397748 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.81s
2026-01-05 01:01:56.397786 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.61s
2026-01-05 01:01:56.399250 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:01:56.403818 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:56.406566 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:01:56.408601 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:56.413120 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:01:56.413244 | orchestrator | 2026-01-05 01:01:56 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:01:59.457115 | orchestrator | 2026-01-05 01:01:59 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:01:59.459164 | orchestrator | 2026-01-05 01:01:59 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:01:59.459883 | orchestrator | 2026-01-05 01:01:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:01:59.460785 | orchestrator | 2026-01-05 01:01:59 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:01:59.461680 | orchestrator | 2026-01-05 01:01:59 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:01:59.461834 | orchestrator | 2026-01-05 01:01:59 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:02.500478 | orchestrator | 2026-01-05 01:02:02 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:02.502656 | orchestrator | 2026-01-05 01:02:02 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:02.504460 | orchestrator | 2026-01-05 01:02:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:02.506118 | orchestrator | 2026-01-05 01:02:02 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:02.507634 | orchestrator | 2026-01-05 01:02:02 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:02.507669 | orchestrator | 2026-01-05 01:02:02 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:05.551611 | orchestrator | 2026-01-05 01:02:05 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:05.552611 | orchestrator | 2026-01-05 01:02:05 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:05.553952 | orchestrator | 2026-01-05 01:02:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:05.555292 | orchestrator | 2026-01-05 01:02:05 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:05.556659 | orchestrator | 2026-01-05 01:02:05 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:05.556718 | orchestrator | 2026-01-05 01:02:05 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:08.603522 | orchestrator | 2026-01-05 01:02:08 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:08.607459 | orchestrator | 2026-01-05 01:02:08 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:08.609808 | orchestrator | 2026-01-05 01:02:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:08.611522 | orchestrator | 2026-01-05 01:02:08 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:08.613391 | orchestrator | 2026-01-05 01:02:08 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:08.613573 | orchestrator | 2026-01-05 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:11.657350 | orchestrator | 2026-01-05 01:02:11 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:11.659932 | orchestrator | 2026-01-05 01:02:11 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:11.660422 | orchestrator | 2026-01-05 01:02:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:11.661778 | orchestrator | 2026-01-05 01:02:11 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:11.662477 | orchestrator | 2026-01-05 01:02:11 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:11.662529 | orchestrator | 2026-01-05 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:14.708985 | orchestrator | 2026-01-05 01:02:14 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:14.709764 | orchestrator | 2026-01-05 01:02:14 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:14.711331 | orchestrator | 2026-01-05 01:02:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:14.713072 | orchestrator | 2026-01-05 01:02:14 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:14.714198 | orchestrator | 2026-01-05 01:02:14 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:14.714255 | orchestrator | 2026-01-05 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:17.760283 | orchestrator | 2026-01-05 01:02:17 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:17.761173 | orchestrator | 2026-01-05 01:02:17 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:17.763153 | orchestrator | 2026-01-05 01:02:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:17.764672 | orchestrator | 2026-01-05 01:02:17 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:17.766536 | orchestrator | 2026-01-05 01:02:17 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:17.766582 | orchestrator | 2026-01-05 01:02:17 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:20.811312 | orchestrator | 2026-01-05 01:02:20 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:20.812935 | orchestrator | 2026-01-05 01:02:20 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:20.814216 | orchestrator | 2026-01-05 01:02:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:20.815689 | orchestrator | 2026-01-05 01:02:20 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:20.816760 | orchestrator | 2026-01-05 01:02:20 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:20.817266 | orchestrator | 2026-01-05 01:02:20 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:23.854667 | orchestrator | 2026-01-05 01:02:23 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:23.856944 | orchestrator | 2026-01-05 01:02:23 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:23.858514 | orchestrator | 2026-01-05 01:02:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:23.860805 | orchestrator | 2026-01-05 01:02:23 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:23.861824 | orchestrator | 2026-01-05 01:02:23 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:23.861870 | orchestrator | 2026-01-05 01:02:23 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:26.906756 | orchestrator | 2026-01-05 01:02:26 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:26.908958 | orchestrator | 2026-01-05 01:02:26 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:26.911397 | orchestrator | 2026-01-05 01:02:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:26.913292 | orchestrator | 2026-01-05 01:02:26 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:26.915750 | orchestrator | 2026-01-05 01:02:26 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:26.915801 | orchestrator | 2026-01-05 01:02:26 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:29.968504 | orchestrator | 2026-01-05 01:02:29 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:29.970182 | orchestrator | 2026-01-05 01:02:29 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:29.972313 | orchestrator | 2026-01-05 01:02:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:29.974316 | orchestrator | 2026-01-05 01:02:29 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:29.976027 | orchestrator | 2026-01-05 01:02:29 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:29.976061 | orchestrator | 2026-01-05 01:02:29 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:33.024268 | orchestrator | 2026-01-05 01:02:33 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:33.025042 | orchestrator | 2026-01-05 01:02:33 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:33.026288 | orchestrator | 2026-01-05 01:02:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:33.027571 | orchestrator | 2026-01-05 01:02:33 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:33.028972 | orchestrator | 2026-01-05 01:02:33 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:33.029034 | orchestrator | 2026-01-05 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:36.069051 | orchestrator | 2026-01-05 01:02:36 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:36.071376 | orchestrator | 2026-01-05 01:02:36 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:36.072908 | orchestrator | 2026-01-05 01:02:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:36.074492 | orchestrator | 2026-01-05 01:02:36 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:36.075447 | orchestrator | 2026-01-05 01:02:36 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:36.075491 | orchestrator | 2026-01-05 01:02:36 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:39.117907 | orchestrator | 2026-01-05 01:02:39 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:39.120134 | orchestrator | 2026-01-05 01:02:39 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:39.121020 | orchestrator | 2026-01-05 01:02:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:39.122267 | orchestrator | 2026-01-05 01:02:39 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:39.124641 | orchestrator | 2026-01-05 01:02:39 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:39.124811 | orchestrator | 2026-01-05 01:02:39 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:42.172241 | orchestrator | 2026-01-05 01:02:42 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:42.173817 | orchestrator | 2026-01-05 01:02:42 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:42.175233 | orchestrator | 2026-01-05 01:02:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:42.177570 | orchestrator | 2026-01-05 01:02:42 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:42.179263 | orchestrator | 2026-01-05 01:02:42 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:42.179306 | orchestrator | 2026-01-05 01:02:42 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:45.226893 | orchestrator | 2026-01-05 01:02:45 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:45.229509 | orchestrator | 2026-01-05 01:02:45 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:45.232200 | orchestrator | 2026-01-05 01:02:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:45.234202 | orchestrator | 2026-01-05 01:02:45 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:45.236298 | orchestrator | 2026-01-05 01:02:45 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:45.236341 | orchestrator | 2026-01-05 01:02:45 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:48.281862 | orchestrator | 2026-01-05 01:02:48 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:48.284930 | orchestrator | 2026-01-05 01:02:48 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:48.289291 | orchestrator | 2026-01-05 01:02:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:48.291043 | orchestrator | 2026-01-05 01:02:48 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:48.293109 | orchestrator | 2026-01-05 01:02:48 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:48.293178 | orchestrator | 2026-01-05 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:51.335913 | orchestrator | 2026-01-05 01:02:51 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:51.336826 | orchestrator | 2026-01-05 01:02:51 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:51.338152 | orchestrator | 2026-01-05 01:02:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:51.338985 | orchestrator | 2026-01-05 01:02:51 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state STARTED
2026-01-05 01:02:51.339625 | orchestrator | 2026-01-05 01:02:51 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:51.339649 | orchestrator | 2026-01-05 01:02:51 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:54.387257 | orchestrator | 2026-01-05 01:02:54 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:54.387354 | orchestrator | 2026-01-05 01:02:54 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:54.387367 | orchestrator | 2026-01-05 01:02:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:54.388247 | orchestrator | 2026-01-05 01:02:54 | INFO  | Task 4a21c9b8-6a21-441e-9014-1bc598232371 is in state SUCCESS
2026-01-05 01:02:54.389749 | orchestrator | 2026-01-05 01:02:54 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:54.389807 | orchestrator | 2026-01-05 01:02:54 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:02:57.451324 | orchestrator | 2026-01-05 01:02:57 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:02:57.454434 | orchestrator | 2026-01-05 01:02:57 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:02:57.456640 | orchestrator | 2026-01-05 01:02:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:02:57.458722 | orchestrator | 2026-01-05 01:02:57 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:02:57.458797 | orchestrator | 2026-01-05 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:03:00.503214 | orchestrator | 2026-01-05 01:03:00 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state STARTED
2026-01-05 01:03:00.504598 | orchestrator | 2026-01-05 01:03:00 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:03:00.505815 | orchestrator | 2026-01-05 01:03:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:03:00.506530 | orchestrator | 2026-01-05 01:03:00 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state STARTED
2026-01-05 01:03:00.506582 | orchestrator | 2026-01-05 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:03:03.569802 | orchestrator | 2026-01-05 01:03:03 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED
2026-01-05 01:03:03.570993 | orchestrator | 2026-01-05 01:03:03 | INFO  | Task bfc034be-f89c-4e2a-839d-39396ac43ccf is in state SUCCESS
2026-01-05 01:03:03.571776 | orchestrator |
2026-01-05 01:03:03.571812 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 01:03:03.571825 | orchestrator | 2.16.14
2026-01-05 01:03:03.571839 | orchestrator |
2026-01-05 01:03:03.571850 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-01-05 01:03:03.571863 | orchestrator |
2026-01-05 01:03:03.571874 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-05 01:03:03.571886 | orchestrator | Monday 05 January 2026 01:01:18 +0000 (0:00:00.247) 0:00:00.247 ********
2026-01-05 01:03:03.571924 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.571965 | orchestrator |
2026-01-05 01:03:03.571977 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-01-05 01:03:03.571988 | orchestrator | Monday 05 January 2026 01:01:20 +0000 (0:00:01.890) 0:00:02.137 ********
2026-01-05 01:03:03.571999 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.572010 | orchestrator |
2026-01-05 01:03:03.572021 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-05 01:03:03.572032 | orchestrator | Monday 05 January 2026 01:01:21 +0000 (0:00:01.067) 0:00:03.204 ********
2026-01-05 01:03:03.572043 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.572054 | orchestrator |
2026-01-05 01:03:03.572065 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-05 01:03:03.572076 | orchestrator | Monday 05 January 2026 01:01:22 +0000 (0:00:00.964) 0:00:04.169 ********
2026-01-05 01:03:03.572087 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.572098 | orchestrator |
2026-01-05 01:03:03.572109 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-05 01:03:03.572120 | orchestrator | Monday 05 January 2026 01:01:23 +0000 (0:00:01.107) 0:00:05.276 ********
2026-01-05 01:03:03.572130 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.572141 | orchestrator |
2026-01-05 01:03:03.572168 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-05 01:03:03.572179 | orchestrator | Monday 05 January 2026 01:01:24 +0000 (0:00:00.973) 0:00:06.250 ********
2026-01-05 01:03:03.572190 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.572201 | orchestrator |
2026-01-05 01:03:03.572212 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-05 01:03:03.572223 | orchestrator | Monday 05 January 2026 01:01:25 +0000 (0:00:01.159) 0:00:07.165 ********
2026-01-05 01:03:03.572233 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.572244 | orchestrator |
2026-01-05 01:03:03.572255 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-05 01:03:03.572266 | orchestrator | Monday 05 January 2026 01:01:27 +0000 (0:00:01.102) 0:00:08.325 ********
2026-01-05 01:03:03.572277 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.572288 | orchestrator |
2026-01-05 01:03:03.572299 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-05 01:03:03.572310 | orchestrator | Monday 05 January 2026 01:01:28 +0000 (0:00:01.102) 0:00:09.428 ********
2026-01-05 01:03:03.572321 | orchestrator | changed: [testbed-manager]
2026-01-05 01:03:03.572332 | orchestrator |
2026-01-05 01:03:03.572342 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-05 01:03:03.572353
| orchestrator | Monday 05 January 2026 01:02:28 +0000 (0:01:00.652) 0:01:10.080 ******** 2026-01-05 01:03:03.572367 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:03:03.572380 | orchestrator | 2026-01-05 01:03:03.572394 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 01:03:03.572408 | orchestrator | 2026-01-05 01:03:03.572421 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 01:03:03.572435 | orchestrator | Monday 05 January 2026 01:02:28 +0000 (0:00:00.177) 0:01:10.257 ******** 2026-01-05 01:03:03.572447 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:03.572460 | orchestrator | 2026-01-05 01:03:03.572473 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 01:03:03.572487 | orchestrator | 2026-01-05 01:03:03.572501 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 01:03:03.572514 | orchestrator | Monday 05 January 2026 01:02:30 +0000 (0:00:01.597) 0:01:11.855 ******** 2026-01-05 01:03:03.572528 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:03:03.572541 | orchestrator | 2026-01-05 01:03:03.572556 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 01:03:03.572569 | orchestrator | 2026-01-05 01:03:03.572582 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 01:03:03.572604 | orchestrator | Monday 05 January 2026 01:02:41 +0000 (0:00:11.369) 0:01:23.224 ******** 2026-01-05 01:03:03.572618 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:03:03.572630 | orchestrator | 2026-01-05 01:03:03.572643 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:03:03.572677 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 
failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 01:03:03.572787 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.572806 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.572818 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.572829 | orchestrator | 2026-01-05 01:03:03.572840 | orchestrator | 2026-01-05 01:03:03.572851 | orchestrator | 2026-01-05 01:03:03.572862 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:03:03.572873 | orchestrator | Monday 05 January 2026 01:02:53 +0000 (0:00:11.322) 0:01:34.547 ******** 2026-01-05 01:03:03.572884 | orchestrator | =============================================================================== 2026-01-05 01:03:03.572895 | orchestrator | Create admin user ------------------------------------------------------ 60.65s 2026-01-05 01:03:03.572920 | orchestrator | Restart ceph manager service ------------------------------------------- 24.29s 2026-01-05 01:03:03.572932 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.89s 2026-01-05 01:03:03.572943 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.16s 2026-01-05 01:03:03.572953 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.11s 2026-01-05 01:03:03.572964 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.10s 2026-01-05 01:03:03.572975 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.07s 2026-01-05 01:03:03.572986 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.97s 2026-01-05 01:03:03.572997 | orchestrator | Set mgr/dashboard/server_port to 
7000 ----------------------------------- 0.96s 2026-01-05 01:03:03.573008 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.92s 2026-01-05 01:03:03.573019 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2026-01-05 01:03:03.573030 | orchestrator | 2026-01-05 01:03:03.573041 | orchestrator | 2026-01-05 01:03:03.573052 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:03:03.573063 | orchestrator | 2026-01-05 01:03:03.573074 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:03:03.573085 | orchestrator | Monday 05 January 2026 01:01:53 +0000 (0:00:00.250) 0:00:00.250 ******** 2026-01-05 01:03:03.573096 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:03.573114 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:03:03.573133 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:03:03.573151 | orchestrator | 2026-01-05 01:03:03.573168 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:03:03.573195 | orchestrator | Monday 05 January 2026 01:01:53 +0000 (0:00:00.278) 0:00:00.529 ******** 2026-01-05 01:03:03.573214 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-05 01:03:03.573233 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-05 01:03:03.573244 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-05 01:03:03.573255 | orchestrator | 2026-01-05 01:03:03.573266 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-05 01:03:03.573277 | orchestrator | 2026-01-05 01:03:03.573288 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-05 01:03:03.573309 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:00.405) 
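The ceph dashboard bootstrap logged above follows a fixed sequence: disable the dashboard module, set the mgr/dashboard/* options (ssl, server_port, server_addr, standby_behaviour, standby_error_status_code), then re-enable the module and create the admin user. As a purely illustrative sketch, the option names and values below are taken from the task names in the log, while the `ceph config set mgr <key> <value>` command form is an assumption based on standard Ceph tooling, not code taken from the playbook:

```python
# Option keys/values as they appear in the logged task names.
DASHBOARD_OPTIONS = [
    ("mgr/dashboard/ssl", "false"),
    ("mgr/dashboard/server_port", "7000"),
    ("mgr/dashboard/server_addr", "0.0.0.0"),
    ("mgr/dashboard/standby_behaviour", "error"),
    ("mgr/dashboard/standby_error_status_code", "404"),
]

def bootstrap_commands(options):
    """Build the (assumed) ceph CLI command sequence matching the log:
    disable the module, apply each option, then enable it again so the
    mgr picks up the new settings."""
    cmds = ["ceph mgr module disable dashboard"]
    cmds += [f"ceph config set mgr {key} {value}" for key, value in options]
    cmds.append("ceph mgr module enable dashboard")
    return cmds

commands = bootstrap_commands(DASHBOARD_OPTIONS)
```

Disabling before reconfiguring matters here because some dashboard settings (port, bind address, TLS) are only read when the module starts; the subsequent "Restart ceph manager service" plays serve the same purpose per node.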
0:00:00.934 ******** 2026-01-05 01:03:03.573320 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:03.573332 | orchestrator | 2026-01-05 01:03:03.573343 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-01-05 01:03:03.573354 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:00.515) 0:00:01.449 ******** 2026-01-05 01:03:03.573365 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left). 2026-01-05 01:03:03.573377 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left). 2026-01-05 01:03:03.573388 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left). 2026-01-05 01:03:03.573399 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left). 2026-01-05 01:03:03.573410 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left). 2026-01-05 01:03:03.573464 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767574979.2413673-3831-76943162518770/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767574979.2413673-3831-76943162518770/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767574979.2413673-3831-76943162518770/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_y6d3t3kh/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_y6d3t3kh/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_y6d3t3kh/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_y6d3t3kh/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_y6d3t3kh/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-05 01:03:03.573493 | orchestrator | 2026-01-05 01:03:03.573506 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:03:03.573520 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.573534 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.573624 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.573640 | orchestrator | 2026-01-05 01:03:03.573652 | orchestrator | 2026-01-05 01:03:03.573685 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:03:03.573698 | orchestrator | Monday 05 January 2026 01:03:00 +0000 (0:01:05.855) 0:01:07.305 ******** 2026-01-05 01:03:03.573719 | orchestrator | =============================================================================== 2026-01-05 01:03:03.573737 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 65.86s 2026-01-05 01:03:03.573756 | orchestrator | placement : include_tasks ----------------------------------------------- 0.52s 2026-01-05 01:03:03.573775 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-01-05 01:03:03.573792 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-01-05 01:03:03.573811 | orchestrator | 2026-01-05 01:03:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:03:03.574607 | orchestrator | 2026-01-05 01:03:03 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:03:03.575844 | orchestrator | 2026-01-05 
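The "FAILED - RETRYING … (5 retries left)" lines above come from an Ansible retries/until loop around the openstack.cloud.catalog_service call: the task is re-run while Keystone answers HTTP 503 behind the load balancer, and only the last failure is surfaced once the retry budget is spent. A minimal sketch of that pattern, with a hypothetical `flaky_endpoint` standing in for the real service-registration call:

```python
import time

class ServiceUnavailable(Exception):
    """Stand-in for keystoneauth1.exceptions.http.ServiceUnavailable."""

def retry(task, retries=5, delay=0):
    """Re-run `task` until it succeeds or the retry budget is spent,
    then let the last exception propagate -- the same shape as the
    Ansible retries/until loop that produced the RETRYING lines."""
    for attempt in range(retries):
        try:
            return task()
        except ServiceUnavailable:
            if attempt == retries - 1:
                raise           # no retries left: the task is reported failed
            time.sleep(delay)   # pause between attempts

def flaky_endpoint(failures_before_up):
    """Hypothetical registration call that returns 503 until the
    identity API is actually reachable."""
    state = {"calls": 0}
    def task():
        state["calls"] += 1
        if state["calls"] <= failures_before_up:
            raise ServiceUnavailable("Service Unavailable (HTTP 503)")
        return "registered"
    return task
```

In this run the endpoint never came up within the budget, so the placement (and, below, magnum) registration tasks failed on testbed-node-0 with the DiscoveryFailure shown in the traceback.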
01:03:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:03:03.576913 | orchestrator | 2026-01-05 01:03:03 | INFO  | Task 430d1f53-294f-46ab-9f43-de1dc61d5784 is in state SUCCESS 2026-01-05 01:03:03.577740 | orchestrator | 2026-01-05 01:03:03.577822 | orchestrator | 2026-01-05 01:03:03.577837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:03:03.577850 | orchestrator | 2026-01-05 01:03:03.577860 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:03:03.577871 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:00.235) 0:00:00.235 ******** 2026-01-05 01:03:03.577881 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:03.577892 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:03:03.577902 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:03:03.577912 | orchestrator | 2026-01-05 01:03:03.577922 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:03:03.577936 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:00.278) 0:00:00.514 ******** 2026-01-05 01:03:03.577954 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-01-05 01:03:03.577971 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-01-05 01:03:03.577988 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-01-05 01:03:03.578005 | orchestrator | 2026-01-05 01:03:03.578092 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-01-05 01:03:03.578127 | orchestrator | 2026-01-05 01:03:03.578151 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-05 01:03:03.578161 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:00.383) 0:00:00.897 ******** 2026-01-05 01:03:03.578171 | orchestrator | included: 
/ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:03.578183 | orchestrator | 2026-01-05 01:03:03.578193 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] *************** 2026-01-05 01:03:03.578203 | orchestrator | Monday 05 January 2026 01:01:55 +0000 (0:00:00.477) 0:00:01.375 ******** 2026-01-05 01:03:03.578213 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left). 2026-01-05 01:03:03.578224 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left). 2026-01-05 01:03:03.578234 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left). 2026-01-05 01:03:03.578244 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left). 2026-01-05 01:03:03.578254 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left). 2026-01-05 01:03:03.578369 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767574979.7808635-3852-55016760837147/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767574979.7808635-3852-55016760837147/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767574979.7808635-3852-55016760837147/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_regi98cj/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_regi98cj/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_regi98cj/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_regi98cj/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_regi98cj/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-05 01:03:03.578395 | orchestrator | 2026-01-05 01:03:03.578406 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:03:03.578416 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.578428 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.578440 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:03:03.578450 | orchestrator | 2026-01-05 01:03:03.578460 | orchestrator | 2026-01-05 01:03:03.578470 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:03:03.578480 | orchestrator | Monday 05 January 2026 01:03:01 +0000 (0:01:05.904) 0:01:07.280 ******** 2026-01-05 01:03:03.578490 | orchestrator | =============================================================================== 2026-01-05 01:03:03.578500 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 65.90s 2026-01-05 01:03:03.578510 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.48s 2026-01-05 01:03:03.578519 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s 2026-01-05 01:03:03.578530 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-01-05 01:03:03.578540 | orchestrator | 2026-01-05 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:06.641088 | orchestrator | 2026-01-05 01:03:06 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:03:06.641321 | orchestrator | 2026-01-05 01:03:06 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:03:06.642179 | orchestrator | 2026-01-05 01:03:06 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:03:06.643255 | orchestrator | 2026-01-05 01:03:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:03:06.643338 | orchestrator | 2026-01-05 01:03:06 | INFO  | Wait 1 second(s) until the next check [… identical polling cycles from 01:03:09 to 01:03:40 elided: tasks fbf3d4e7…, afe8ab2b…, ab80a91a… and 861ec4e0… remain in state STARTED, each cycle ending with "Wait 1 second(s) until the next check" …] 2026-01-05 01:03:43.209554 | orchestrator | 2026-01-05 01:03:43 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:03:43.209696 | orchestrator | 2026-01-05 01:03:43 | INFO  | Task
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:03:43.211509 | orchestrator | 2026-01-05 01:03:43 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:03:43.213759 | orchestrator | 2026-01-05 01:03:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:03:43.213827 | orchestrator | 2026-01-05 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:46.259206 | orchestrator | 2026-01-05 01:03:46 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:03:46.261061 | orchestrator | 2026-01-05 01:03:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:03:46.263211 | orchestrator | 2026-01-05 01:03:46 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:03:46.264577 | orchestrator | 2026-01-05 01:03:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:03:46.264847 | orchestrator | 2026-01-05 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:49.301872 | orchestrator | 2026-01-05 01:03:49 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:03:49.302702 | orchestrator | 2026-01-05 01:03:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:03:49.303741 | orchestrator | 2026-01-05 01:03:49 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:03:49.305018 | orchestrator | 2026-01-05 01:03:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:03:49.305055 | orchestrator | 2026-01-05 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:52.381077 | orchestrator | 2026-01-05 01:03:52 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:03:52.383319 | orchestrator | 2026-01-05 01:03:52 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:03:52.387765 | orchestrator | 2026-01-05 01:03:52 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:03:52.389334 | orchestrator | 2026-01-05 01:03:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:03:52.389508 | orchestrator | 2026-01-05 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:55.429638 | orchestrator | 2026-01-05 01:03:55 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:03:55.435096 | orchestrator | 2026-01-05 01:03:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:03:55.437947 | orchestrator | 2026-01-05 01:03:55 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:03:55.438723 | orchestrator | 2026-01-05 01:03:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:03:55.438747 | orchestrator | 2026-01-05 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:58.481539 | orchestrator | 2026-01-05 01:03:58 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:03:58.483557 | orchestrator | 2026-01-05 01:03:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:03:58.485914 | orchestrator | 2026-01-05 01:03:58 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:03:58.488484 | orchestrator | 2026-01-05 01:03:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:03:58.488548 | orchestrator | 2026-01-05 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:01.546091 | orchestrator | 2026-01-05 01:04:01 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:01.546214 | orchestrator | 2026-01-05 01:04:01 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:01.546230 | orchestrator | 2026-01-05 01:04:01 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:01.546243 | orchestrator | 2026-01-05 01:04:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:01.546255 | orchestrator | 2026-01-05 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:04.589308 | orchestrator | 2026-01-05 01:04:04 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:04.590280 | orchestrator | 2026-01-05 01:04:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:04.590976 | orchestrator | 2026-01-05 01:04:04 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:04.592045 | orchestrator | 2026-01-05 01:04:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:04.592080 | orchestrator | 2026-01-05 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:07.634890 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:07.635517 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:07.638695 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:07.639941 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:07.639998 | orchestrator | 2026-01-05 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:10.698351 | orchestrator | 2026-01-05 01:04:10 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:10.699344 | orchestrator | 2026-01-05 01:04:10 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:10.702177 | orchestrator | 2026-01-05 01:04:10 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:10.703864 | orchestrator | 2026-01-05 01:04:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:10.703981 | orchestrator | 2026-01-05 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:13.748269 | orchestrator | 2026-01-05 01:04:13 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:13.749436 | orchestrator | 2026-01-05 01:04:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:13.751630 | orchestrator | 2026-01-05 01:04:13 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:13.751772 | orchestrator | 2026-01-05 01:04:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:13.751955 | orchestrator | 2026-01-05 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:16.803835 | orchestrator | 2026-01-05 01:04:16 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:16.806369 | orchestrator | 2026-01-05 01:04:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:16.807880 | orchestrator | 2026-01-05 01:04:16 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:16.810235 | orchestrator | 2026-01-05 01:04:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:16.810284 | orchestrator | 2026-01-05 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:19.860049 | orchestrator | 2026-01-05 01:04:19 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:19.863938 | orchestrator | 2026-01-05 01:04:19 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:19.869029 | orchestrator | 2026-01-05 01:04:19 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:19.870826 | orchestrator | 2026-01-05 01:04:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:19.871131 | orchestrator | 2026-01-05 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:22.920313 | orchestrator | 2026-01-05 01:04:22 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:22.921823 | orchestrator | 2026-01-05 01:04:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:22.923760 | orchestrator | 2026-01-05 01:04:22 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:22.924876 | orchestrator | 2026-01-05 01:04:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:22.925059 | orchestrator | 2026-01-05 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:25.968896 | orchestrator | 2026-01-05 01:04:25 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:25.972862 | orchestrator | 2026-01-05 01:04:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:25.975743 | orchestrator | 2026-01-05 01:04:25 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:25.977247 | orchestrator | 2026-01-05 01:04:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:25.977295 | orchestrator | 2026-01-05 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:29.031996 | orchestrator | 2026-01-05 01:04:29 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:29.033693 | orchestrator | 2026-01-05 01:04:29 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:29.035447 | orchestrator | 2026-01-05 01:04:29 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:29.037696 | orchestrator | 2026-01-05 01:04:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:29.037749 | orchestrator | 2026-01-05 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:32.093425 | orchestrator | 2026-01-05 01:04:32 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:32.095870 | orchestrator | 2026-01-05 01:04:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:32.098330 | orchestrator | 2026-01-05 01:04:32 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:32.100618 | orchestrator | 2026-01-05 01:04:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:32.100659 | orchestrator | 2026-01-05 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:35.147673 | orchestrator | 2026-01-05 01:04:35 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:35.148776 | orchestrator | 2026-01-05 01:04:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:35.150153 | orchestrator | 2026-01-05 01:04:35 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:35.151998 | orchestrator | 2026-01-05 01:04:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:35.152121 | orchestrator | 2026-01-05 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:38.215121 | orchestrator | 2026-01-05 01:04:38 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:38.215204 | orchestrator | 2026-01-05 01:04:38 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:38.215210 | orchestrator | 2026-01-05 01:04:38 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:38.215215 | orchestrator | 2026-01-05 01:04:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:38.215220 | orchestrator | 2026-01-05 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:41.255946 | orchestrator | 2026-01-05 01:04:41 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:41.257024 | orchestrator | 2026-01-05 01:04:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:41.259259 | orchestrator | 2026-01-05 01:04:41 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:41.261385 | orchestrator | 2026-01-05 01:04:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:41.262082 | orchestrator | 2026-01-05 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:44.312088 | orchestrator | 2026-01-05 01:04:44 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:44.312153 | orchestrator | 2026-01-05 01:04:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:44.312610 | orchestrator | 2026-01-05 01:04:44 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:44.314260 | orchestrator | 2026-01-05 01:04:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:44.314301 | orchestrator | 2026-01-05 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:47.358012 | orchestrator | 2026-01-05 01:04:47 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:47.359453 | orchestrator | 2026-01-05 01:04:47 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:47.360868 | orchestrator | 2026-01-05 01:04:47 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:47.362520 | orchestrator | 2026-01-05 01:04:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:47.362574 | orchestrator | 2026-01-05 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:50.409418 | orchestrator | 2026-01-05 01:04:50 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:50.409587 | orchestrator | 2026-01-05 01:04:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:50.409835 | orchestrator | 2026-01-05 01:04:50 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:50.413077 | orchestrator | 2026-01-05 01:04:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:50.413152 | orchestrator | 2026-01-05 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:53.465304 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:53.467451 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:53.469714 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:53.473137 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:53.473268 | orchestrator | 2026-01-05 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:56.514235 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:56.516158 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:56.518696 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:56.520804 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:56.520839 | orchestrator | 2026-01-05 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:59.568994 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:04:59.571180 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:04:59.573397 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:04:59.575442 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:04:59.575510 | orchestrator | 2026-01-05 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:02.618846 | orchestrator | 2026-01-05 01:05:02 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:02.620528 | orchestrator | 2026-01-05 01:05:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:02.623408 | orchestrator | 2026-01-05 01:05:02 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:02.625542 | orchestrator | 2026-01-05 01:05:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:02.625647 | orchestrator | 2026-01-05 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:05.675670 | orchestrator | 2026-01-05 01:05:05 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:05.677778 | orchestrator | 2026-01-05 01:05:05 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:05.680146 | orchestrator | 2026-01-05 01:05:05 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:05.681996 | orchestrator | 2026-01-05 01:05:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:05.682144 | orchestrator | 2026-01-05 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:08.727188 | orchestrator | 2026-01-05 01:05:08 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:08.729346 | orchestrator | 2026-01-05 01:05:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:08.731176 | orchestrator | 2026-01-05 01:05:08 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:08.733169 | orchestrator | 2026-01-05 01:05:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:08.733231 | orchestrator | 2026-01-05 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:11.765976 | orchestrator | 2026-01-05 01:05:11 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:11.766185 | orchestrator | 2026-01-05 01:05:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:11.767140 | orchestrator | 2026-01-05 01:05:11 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:11.768143 | orchestrator | 2026-01-05 01:05:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:11.768173 | orchestrator | 2026-01-05 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:14.810527 | orchestrator | 2026-01-05 01:05:14 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:14.810625 | orchestrator | 2026-01-05 01:05:14 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:14.810633 | orchestrator | 2026-01-05 01:05:14 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:14.810638 | orchestrator | 2026-01-05 01:05:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:14.810643 | orchestrator | 2026-01-05 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:17.854307 | orchestrator | 2026-01-05 01:05:17 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:17.857640 | orchestrator | 2026-01-05 01:05:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:17.858545 | orchestrator | 2026-01-05 01:05:17 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:17.859732 | orchestrator | 2026-01-05 01:05:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:17.859786 | orchestrator | 2026-01-05 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:20.903938 | orchestrator | 2026-01-05 01:05:20 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:20.905390 | orchestrator | 2026-01-05 01:05:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:20.907096 | orchestrator | 2026-01-05 01:05:20 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:20.908696 | orchestrator | 2026-01-05 01:05:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:20.908742 | orchestrator | 2026-01-05 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:23.949890 | orchestrator | 2026-01-05 01:05:23 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:23.950993 | orchestrator | 2026-01-05 01:05:23 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:23.952496 | orchestrator | 2026-01-05 01:05:23 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:23.954092 | orchestrator | 2026-01-05 01:05:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:23.954157 | orchestrator | 2026-01-05 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:27.016880 | orchestrator | 2026-01-05 01:05:27 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:27.018794 | orchestrator | 2026-01-05 01:05:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:27.020026 | orchestrator | 2026-01-05 01:05:27 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:27.022225 | orchestrator | 2026-01-05 01:05:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:27.023046 | orchestrator | 2026-01-05 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:30.066075 | orchestrator | 2026-01-05 01:05:30 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:30.068043 | orchestrator | 2026-01-05 01:05:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:30.069143 | orchestrator | 2026-01-05 01:05:30 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:30.071315 | orchestrator | 2026-01-05 01:05:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:30.071364 | orchestrator | 2026-01-05 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:33.111828 | orchestrator | 2026-01-05 01:05:33 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:33.111914 | orchestrator | 2026-01-05 01:05:33 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:33.113251 | orchestrator | 2026-01-05 01:05:33 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:33.113277 | orchestrator | 2026-01-05 01:05:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:33.113285 | orchestrator | 2026-01-05 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:36.153112 | orchestrator | 2026-01-05 01:05:36 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:36.154325 | orchestrator | 2026-01-05 01:05:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:36.157143 | orchestrator | 2026-01-05 01:05:36 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:36.160029 | orchestrator | 2026-01-05 01:05:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:36.160107 | orchestrator | 2026-01-05 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:39.214141 | orchestrator | 2026-01-05 01:05:39 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:39.215329 | orchestrator | 2026-01-05 01:05:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:39.215978 | orchestrator | 2026-01-05 01:05:39 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:39.216965 | orchestrator | 2026-01-05 01:05:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:39.216991 | orchestrator | 2026-01-05 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:42.265782 | orchestrator | 2026-01-05 01:05:42 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:42.268866 | orchestrator | 2026-01-05 01:05:42 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:42.271012 | orchestrator | 2026-01-05 01:05:42 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:42.273631 | orchestrator | 2026-01-05 01:05:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:42.273697 | orchestrator | 2026-01-05 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:45.317188 | orchestrator | 2026-01-05 01:05:45 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:45.318226 | orchestrator | 2026-01-05 01:05:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:45.319737 | orchestrator | 2026-01-05 01:05:45 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:45.321204 | orchestrator | 2026-01-05 01:05:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:45.321269 | orchestrator | 2026-01-05 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:48.355591 | orchestrator | 2026-01-05 01:05:48 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:48.357864 | orchestrator | 2026-01-05 01:05:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:48.358954 | orchestrator | 2026-01-05 01:05:48 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:48.360054 | orchestrator | 2026-01-05 01:05:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:48.360105 | orchestrator | 2026-01-05 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:51.410912 | orchestrator | 2026-01-05 01:05:51 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:51.413915 | orchestrator | 2026-01-05 01:05:51 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:51.416096 | orchestrator | 2026-01-05 01:05:51 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:51.418115 | orchestrator | 2026-01-05 01:05:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:51.418156 | orchestrator | 2026-01-05 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:54.475850 | orchestrator | 2026-01-05 01:05:54 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state STARTED 2026-01-05 01:05:54.477854 | orchestrator | 2026-01-05 01:05:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:54.479821 | orchestrator | 2026-01-05 01:05:54 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:54.481312 | orchestrator | 2026-01-05 01:05:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:54.481397 | orchestrator | 2026-01-05 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:05:57.530704 | orchestrator | 2026-01-05 01:05:57 | INFO  | Task fbf3d4e7-655d-490c-b134-767dff8af23e is in state SUCCESS 2026-01-05 01:05:57.532887 | orchestrator | 2026-01-05 01:05:57.532943 | orchestrator | 2026-01-05 01:05:57.532953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:05:57.533010 | orchestrator | 2026-01-05 01:05:57.533023 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:05:57.533035 | orchestrator | Monday 05 January 2026 01:03:05 +0000 (0:00:00.297) 0:00:00.297 ******** 2026-01-05 01:05:57.533047 | orchestrator | ok: [testbed-manager] 2026-01-05 01:05:57.533085 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:05:57.533098 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:05:57.533110 | orchestrator | ok: [testbed-node-2] 
2026-01-05 01:05:57.533121 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:05:57.533132 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:05:57.533139 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:05:57.533146 | orchestrator | 2026-01-05 01:05:57.533153 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:05:57.533194 | orchestrator | Monday 05 January 2026 01:03:06 +0000 (0:00:00.872) 0:00:01.170 ******** 2026-01-05 01:05:57.533202 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-05 01:05:57.533210 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-05 01:05:57.533216 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-05 01:05:57.533223 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-05 01:05:57.533230 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-05 01:05:57.533237 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-05 01:05:57.533243 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-05 01:05:57.533250 | orchestrator | 2026-01-05 01:05:57.533257 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-05 01:05:57.533263 | orchestrator | 2026-01-05 01:05:57.533270 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-05 01:05:57.533276 | orchestrator | Monday 05 January 2026 01:03:07 +0000 (0:00:00.884) 0:00:02.054 ******** 2026-01-05 01:05:57.533286 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:05:57.533299 | orchestrator | 2026-01-05 01:05:57.533310 | orchestrator | TASK [prometheus : Ensuring config directories exist] 
************************** 2026-01-05 01:05:57.533321 | orchestrator | Monday 05 January 2026 01:03:08 +0000 (0:00:01.589) 0:00:03.644 ******** 2026-01-05 01:05:57.533374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-05 01:05:57.533395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.533409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.533477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.533493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-01-05 01:05:57.533519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.533551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.533563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.533575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533605 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:05:57.533714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533775 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.533831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.533854 | orchestrator | 2026-01-05 01:05:57.533866 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-05 01:05:57.533878 | orchestrator | Monday 05 January 2026 01:03:11 +0000 (0:00:02.950) 0:00:06.595 ******** 2026-01-05 01:05:57.533890 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:05:57.533901 | orchestrator | 2026-01-05 01:05:57.533912 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-05 01:05:57.533922 | 
orchestrator | Monday 05 January 2026 01:03:13 +0000 (0:00:01.420) 0:00:08.015 ******** 2026-01-05 01:05:57.533940 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-05 01:05:57.533963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.533976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.533996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.534008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.534098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.534111 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.534137 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.534159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534241 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534372 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:05:57.534400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534412 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.534440 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.534485 | orchestrator | 2026-01-05 01:05:57.534496 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-05 01:05:57.534514 | orchestrator | Monday 05 January 2026 01:03:18 +0000 (0:00:05.641) 0:00:13.656 ******** 2026-01-05 01:05:57.534532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.534545 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-05 01:05:57.534556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.534574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534595 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.534606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.534628 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534651 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.534664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.534695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.534707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534733 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 01:05:57.534748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.534760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534771 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:57.534784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534870 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:57.534932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534941 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:57.534948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.534963 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.534970 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:05:57.535026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.535044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.535055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535108 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:05:57.535159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535175 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:05:57.535188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535200 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:05:57.535208 | orchestrator | 2026-01-05 01:05:57.535214 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-05 01:05:57.535222 | orchestrator | Monday 05 January 2026 01:03:21 +0000 (0:00:02.288) 0:00:15.945 ******** 2026-01-05 01:05:57.535235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-05 01:05:57.535243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.535256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535264 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.535277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.535292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535323 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 01:05:57.535330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535451 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:57.535466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.535474 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535488 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:05:57.535500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.535515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535557 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:57.535564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.535575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.535582 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:57.535588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-01-05 01:05:57.535594 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:05:57.535601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.535608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.536097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.536127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.536139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.536146 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:05:57.536153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.536160 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:05:57.536166 | orchestrator | 2026-01-05 01:05:57.536173 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-05 01:05:57.536180 | orchestrator | Monday 05 January 2026 01:03:23 +0000 (0:00:02.584) 0:00:18.529 ******** 2026-01-05 01:05:57.536194 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-05 01:05:57.536202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.536225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.536232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.536239 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.536245 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.536255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.536262 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:05:57.536269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536299 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536377 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:05:57.536386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-01-05 01:05:57.536399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536417 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:05:57.536454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536461 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:05:57.536474 | orchestrator | 2026-01-05 01:05:57.536480 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-05 01:05:57.536487 | orchestrator | Monday 05 January 2026 01:03:29 +0000 (0:00:05.905) 0:00:24.435 ******** 2026-01-05 01:05:57.536494 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:05:57.536500 | orchestrator | 2026-01-05 01:05:57.536506 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-05 01:05:57.536513 | orchestrator | Monday 05 January 2026 01:03:31 +0000 (0:00:01.411) 0:00:25.846 ******** 2026-01-05 01:05:57.536519 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:05:57.536526 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:57.536532 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:57.536546 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:57.536553 
| orchestrator | skipping: [testbed-node-3] 2026-01-05 01:05:57.536559 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:05:57.536565 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:05:57.536571 | orchestrator | 2026-01-05 01:05:57.536577 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-05 01:05:57.536584 | orchestrator | Monday 05 January 2026 01:03:31 +0000 (0:00:00.821) 0:00:26.667 ******** 2026-01-05 01:05:57.536590 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:05:57.536596 | orchestrator | 2026-01-05 01:05:57.536602 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-05 01:05:57.536608 | orchestrator | Monday 05 January 2026 01:03:32 +0000 (0:00:00.837) 0:00:27.505 ******** 2026-01-05 01:05:57.536615 | orchestrator | [WARNING]: Skipped 2026-01-05 01:05:57.536622 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536629 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-05 01:05:57.536635 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536642 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-05 01:05:57.536648 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:05:57.536654 | orchestrator | [WARNING]: Skipped 2026-01-05 01:05:57.536660 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536666 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-05 01:05:57.536673 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536679 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-05 01:05:57.536685 | orchestrator | [WARNING]: Skipped 2026-01-05 01:05:57.536692 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536701 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-05 01:05:57.536709 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536716 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-05 01:05:57.536724 | orchestrator | [WARNING]: Skipped 2026-01-05 01:05:57.536732 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536743 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-05 01:05:57.536750 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536757 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-05 01:05:57.536765 | orchestrator | [WARNING]: Skipped 2026-01-05 01:05:57.536772 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536780 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-05 01:05:57.536787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536795 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-05 01:05:57.536802 | orchestrator | [WARNING]: Skipped 2026-01-05 01:05:57.536810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536834 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-05 01:05:57.536842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536849 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-05 01:05:57.536884 | orchestrator | [WARNING]: Skipped 2026-01-05 01:05:57.536892 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536899 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-05 01:05:57.536907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-05 01:05:57.536924 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-05 01:05:57.536931 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:05:57.536938 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 01:05:57.536946 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 01:05:57.536953 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-05 01:05:57.536960 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 01:05:57.536967 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-05 01:05:57.536974 | orchestrator | 2026-01-05 01:05:57.536982 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-05 01:05:57.536989 | orchestrator | Monday 05 January 2026 01:03:34 +0000 (0:00:01.787) 0:00:29.292 ******** 2026-01-05 01:05:57.536997 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-05 01:05:57.537009 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-05 01:05:57.537020 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:57.537030 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-05 01:05:57.537039 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:57.537048 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:57.537068 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-05 01:05:57.537078 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:05:57.537093 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:05:57.537104 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537114 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:05:57.537125 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537136 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:05:57.537146 | orchestrator |
2026-01-05 01:05:57.537156 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-05 01:05:57.537166 | orchestrator | Monday 05 January 2026 01:03:49 +0000 (0:00:15.481) 0:00:44.773 ********
2026-01-05 01:05:57.537172 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 01:05:57.537179 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:57.537185 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 01:05:57.537191 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:57.537197 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 01:05:57.537204 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:05:57.537210 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 01:05:57.537216 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537222 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 01:05:57.537228 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:05:57.537234 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 01:05:57.537240 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537247 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 01:05:57.537253 | orchestrator |
2026-01-05 01:05:57.537259 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-01-05 01:05:57.537265 | orchestrator | Monday 05 January 2026 01:03:53 +0000 (0:00:03.472) 0:00:48.246 ********
2026-01-05 01:05:57.537272 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 01:05:57.537284 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:57.537296 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 01:05:57.537303 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 01:05:57.537309 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:57.537316 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:05:57.537322 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 01:05:57.537328 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:05:57.537334 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 01:05:57.537341 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 01:05:57.537364 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537371 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 01:05:57.537377 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537383 | orchestrator |
2026-01-05 01:05:57.537390 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-01-05 01:05:57.537396 | orchestrator | Monday 05 January 2026 01:03:55 +0000 (0:00:01.652) 0:00:49.898 ********
2026-01-05 01:05:57.537402 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 01:05:57.537408 | orchestrator |
2026-01-05 01:05:57.537415 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-01-05 01:05:57.537421 | orchestrator | Monday 05 January 2026 01:03:55 +0000 (0:00:00.795) 0:00:50.693 ********
2026-01-05 01:05:57.537427 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:05:57.537433 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:57.537440 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:57.537446 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:05:57.537452 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:05:57.537458 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537464 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537470 | orchestrator |
2026-01-05 01:05:57.537477 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-01-05 01:05:57.537483 | orchestrator | Monday 05 January 2026 01:03:56 +0000 (0:00:00.725) 0:00:51.419 ********
2026-01-05 01:05:57.537489 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:05:57.537495 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:05:57.537501 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537508 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537514 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:05:57.537520 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:05:57.537526 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:05:57.537532 | orchestrator |
2026-01-05 01:05:57.537538 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-01-05 01:05:57.537545 | orchestrator | Monday 05 January 2026 01:03:58 +0000 (0:00:02.263) 0:00:53.682 ********
2026-01-05 01:05:57.537555 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:05:57.537562 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:05:57.537568 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:57.537574 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:05:57.537581 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:05:57.537591 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:05:57.537598 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:57.537608 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:05:57.537615 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:05:57.537621 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:05:57.537627 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:05:57.537633 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537640 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:05:57.537646 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537652 | orchestrator |
2026-01-05 01:05:57.537658 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-01-05 01:05:57.537665 | orchestrator | Monday 05 January 2026 01:04:00 +0000 (0:00:01.680) 0:00:55.362 ********
2026-01-05 01:05:57.537671 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 01:05:57.537677 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:57.537684 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 01:05:57.537690 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:57.537696 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 01:05:57.537702 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:05:57.537709 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 01:05:57.537715 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:05:57.537721 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 01:05:57.537727 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537737 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 01:05:57.537744 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537750 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 01:05:57.537756 | orchestrator |
2026-01-05 01:05:57.537762 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-01-05 01:05:57.537769 | orchestrator | Monday 05 January 2026 01:04:02 +0000 (0:00:01.633) 0:00:56.996 ********
2026-01-05 01:05:57.537775 | orchestrator | [WARNING]: Skipped
2026-01-05 01:05:57.537782 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-01-05 01:05:57.537788 | orchestrator | due to this access issue:
2026-01-05 01:05:57.537794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-01-05 01:05:57.537800 | orchestrator | not a directory
2026-01-05 01:05:57.537807 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 01:05:57.537813 | orchestrator |
2026-01-05 01:05:57.537819 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-01-05 01:05:57.537825 | orchestrator | Monday 05 January 2026 01:04:03 +0000 (0:00:01.217) 0:00:58.213 ********
2026-01-05 01:05:57.537831 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:05:57.537838 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:57.537844 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:57.537850 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:05:57.537856 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:05:57.537862 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537868 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537875 | orchestrator |
2026-01-05 01:05:57.537881 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-01-05 01:05:57.537892 | orchestrator | Monday 05 January 2026 01:04:04 +0000 (0:00:00.902) 0:00:59.115 ********
2026-01-05 01:05:57.537899 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:05:57.537905 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:57.537911 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:57.537917 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:05:57.537923 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:05:57.537929 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:05:57.537936 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:05:57.537942 | orchestrator |
2026-01-05 01:05:57.537948 | orchestrator | TASK [service-check-containers : prometheus | Check containers] ****************
2026-01-05 01:05:57.537954 | orchestrator | Monday 05 January 2026 01:04:05 +0000 (0:00:00.729) 0:00:59.844 ********
2026-01-05 01:05:57.537964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.537973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.537979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.537992 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-05 01:05:57.538000 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.538054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.538064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.538075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538095 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.538107 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538132 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538195 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:05:57.538206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538226 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538260 | orchestrator |
2026-01-05 01:05:57.538267 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-01-05 01:05:57.538273 | orchestrator | Monday 05 January 2026 01:04:09 +0000 (0:00:04.568) 0:01:04.413 ********
2026-01-05 01:05:57.538279 | orchestrator | changed: [testbed-manager] => {
2026-01-05 01:05:57.538285 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:05:57.538292 | orchestrator | }
2026-01-05 01:05:57.538298 | orchestrator | changed: [testbed-node-0] => {
2026-01-05 01:05:57.538305 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:05:57.538311 | orchestrator | }
2026-01-05 01:05:57.538317 | orchestrator | changed: [testbed-node-1] => {
2026-01-05 01:05:57.538323 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:05:57.538329 | orchestrator | }
2026-01-05 01:05:57.538336 | orchestrator | changed: [testbed-node-2] => {
2026-01-05 01:05:57.538342 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:05:57.538363 | orchestrator | }
2026-01-05 01:05:57.538369 | orchestrator | changed: [testbed-node-3] => {
2026-01-05 01:05:57.538375 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:05:57.538382 | orchestrator | }
2026-01-05 01:05:57.538388 | orchestrator | changed: [testbed-node-4] => {
2026-01-05 01:05:57.538398 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:05:57.538404 | orchestrator | }
2026-01-05 01:05:57.538410 | orchestrator | changed: [testbed-node-5] => {
2026-01-05 01:05:57.538417 | orchestrator |  "msg": "Notifying handlers"
2026-01-05 01:05:57.538423 | orchestrator | }
2026-01-05 01:05:57.538429 | orchestrator |
2026-01-05 01:05:57.538435 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-05 01:05:57.538441 | orchestrator | Monday 05 January 2026 01:04:10 +0000 (0:00:00.982) 0:01:05.396 ********
2026-01-05 01:05:57.538448 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-05 01:05:57.538467 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.538474 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538481 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:05:57.538488 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.538505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.538550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538563 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:05:57.538572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 01:05:57.538579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:05:57.538590 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:57.538596 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:57.538603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 01:05:57.538613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1',
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.538620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.538627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.538633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:05:57.538639 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:57.538650 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.538657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.538670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.538676 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:05:57.538683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.538694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.538700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.538707 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:05:57.538713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:05:57.538720 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.538730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:05:57.538741 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:05:57.538748 | orchestrator | 2026-01-05 01:05:57.538754 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-05 01:05:57.538760 | orchestrator | Monday 05 January 2026 01:04:12 +0000 (0:00:02.028) 0:01:07.424 ******** 2026-01-05 01:05:57.538767 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-05 01:05:57.538773 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:05:57.538779 | orchestrator | 2026-01-05 01:05:57.538785 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:05:57.538792 | orchestrator | Monday 05 January 2026 01:04:13 +0000 (0:00:01.230) 0:01:08.655 ******** 2026-01-05 01:05:57.538798 | orchestrator | 2026-01-05 01:05:57.538804 | orchestrator | TASK 
[prometheus : Flush handlers] ********************************************* 2026-01-05 01:05:57.538810 | orchestrator | Monday 05 January 2026 01:04:13 +0000 (0:00:00.077) 0:01:08.733 ******** 2026-01-05 01:05:57.538816 | orchestrator | 2026-01-05 01:05:57.538823 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:05:57.538829 | orchestrator | Monday 05 January 2026 01:04:14 +0000 (0:00:00.085) 0:01:08.818 ******** 2026-01-05 01:05:57.538835 | orchestrator | 2026-01-05 01:05:57.538841 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:05:57.538847 | orchestrator | Monday 05 January 2026 01:04:14 +0000 (0:00:00.073) 0:01:08.891 ******** 2026-01-05 01:05:57.538853 | orchestrator | 2026-01-05 01:05:57.538860 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:05:57.538866 | orchestrator | Monday 05 January 2026 01:04:14 +0000 (0:00:00.072) 0:01:08.964 ******** 2026-01-05 01:05:57.538872 | orchestrator | 2026-01-05 01:05:57.538878 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:05:57.538884 | orchestrator | Monday 05 January 2026 01:04:14 +0000 (0:00:00.064) 0:01:09.029 ******** 2026-01-05 01:05:57.538890 | orchestrator | 2026-01-05 01:05:57.538897 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:05:57.538903 | orchestrator | Monday 05 January 2026 01:04:14 +0000 (0:00:00.309) 0:01:09.338 ******** 2026-01-05 01:05:57.538909 | orchestrator | 2026-01-05 01:05:57.538915 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-05 01:05:57.538925 | orchestrator | Monday 05 January 2026 01:04:14 +0000 (0:00:00.101) 0:01:09.440 ******** 2026-01-05 01:05:57.538931 | orchestrator | changed: [testbed-manager] 2026-01-05 
01:05:57.538937 | orchestrator | 2026-01-05 01:05:57.538944 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-05 01:05:57.538950 | orchestrator | Monday 05 January 2026 01:04:36 +0000 (0:00:21.505) 0:01:30.945 ******** 2026-01-05 01:05:57.538956 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:05:57.538963 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:05:57.538969 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:05:57.538975 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:05:57.538982 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:05:57.538988 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:57.538994 | orchestrator | changed: [testbed-manager] 2026-01-05 01:05:57.539000 | orchestrator | 2026-01-05 01:05:57.539007 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-05 01:05:57.539013 | orchestrator | Monday 05 January 2026 01:04:49 +0000 (0:00:13.668) 0:01:44.614 ******** 2026-01-05 01:05:57.539019 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:57.539025 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:05:57.539031 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:05:57.539037 | orchestrator | 2026-01-05 01:05:57.539044 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-05 01:05:57.539050 | orchestrator | Monday 05 January 2026 01:05:00 +0000 (0:00:10.394) 0:01:55.009 ******** 2026-01-05 01:05:57.539056 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:05:57.539067 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:05:57.539073 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:57.539080 | orchestrator | 2026-01-05 01:05:57.539086 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-05 01:05:57.539092 | orchestrator | Monday 05 January 2026 
01:05:09 +0000 (0:00:09.637) 0:02:04.646 ******** 2026-01-05 01:05:57.539098 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:05:57.539104 | orchestrator | changed: [testbed-manager] 2026-01-05 01:05:57.539111 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:05:57.539117 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:05:57.539123 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:05:57.539129 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:57.539135 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:05:57.539141 | orchestrator | 2026-01-05 01:05:57.539147 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-05 01:05:57.539153 | orchestrator | Monday 05 January 2026 01:05:23 +0000 (0:00:13.195) 0:02:17.842 ******** 2026-01-05 01:05:57.539159 | orchestrator | changed: [testbed-manager] 2026-01-05 01:05:57.539165 | orchestrator | 2026-01-05 01:05:57.539172 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-05 01:05:57.539178 | orchestrator | Monday 05 January 2026 01:05:31 +0000 (0:00:08.457) 0:02:26.299 ******** 2026-01-05 01:05:57.539184 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:05:57.539190 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:05:57.539196 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:57.539203 | orchestrator | 2026-01-05 01:05:57.539209 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-05 01:05:57.539215 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:04.616) 0:02:30.916 ******** 2026-01-05 01:05:57.539221 | orchestrator | changed: [testbed-manager] 2026-01-05 01:05:57.539227 | orchestrator | 2026-01-05 01:05:57.539237 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-05 01:05:57.539243 | orchestrator | Monday 05 January 2026 
01:05:46 +0000 (0:00:10.791) 0:02:41.708 ******** 2026-01-05 01:05:57.539249 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:05:57.539255 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:05:57.539261 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:05:57.539267 | orchestrator | 2026-01-05 01:05:57.539274 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:05:57.539280 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-05 01:05:57.539287 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:05:57.539293 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:05:57.539300 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:05:57.539306 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-05 01:05:57.539312 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-05 01:05:57.539318 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-05 01:05:57.539324 | orchestrator | 2026-01-05 01:05:57.539330 | orchestrator | 2026-01-05 01:05:57.539337 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:05:57.539343 | orchestrator | Monday 05 January 2026 01:05:56 +0000 (0:00:09.953) 0:02:51.661 ******** 2026-01-05 01:05:57.539396 | orchestrator | =============================================================================== 2026-01-05 01:05:57.539404 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.51s 2026-01-05 01:05:57.539410 | orchestrator | 
prometheus : Copying over prometheus config file ----------------------- 15.48s 2026-01-05 01:05:57.539421 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.67s 2026-01-05 01:05:57.539427 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.20s 2026-01-05 01:05:57.539433 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.79s 2026-01-05 01:05:57.539439 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.39s 2026-01-05 01:05:57.539446 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.95s 2026-01-05 01:05:57.539452 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.64s 2026-01-05 01:05:57.539458 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.46s 2026-01-05 01:05:57.539464 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.91s 2026-01-05 01:05:57.539470 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.64s 2026-01-05 01:05:57.539476 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.62s 2026-01-05 01:05:57.539483 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.57s 2026-01-05 01:05:57.539489 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.47s 2026-01-05 01:05:57.539495 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.95s 2026-01-05 01:05:57.539501 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.58s 2026-01-05 01:05:57.539507 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.29s 2026-01-05 01:05:57.539513 | orchestrator | 
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.26s 2026-01-05 01:05:57.539519 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.03s 2026-01-05 01:05:57.539526 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.79s 2026-01-05 01:05:57.539532 | orchestrator | 2026-01-05 01:05:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:05:57.539538 | orchestrator | 2026-01-05 01:05:57 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:05:57.539544 | orchestrator | 2026-01-05 01:05:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:05:57.539551 | orchestrator | 2026-01-05 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:00.599000 | orchestrator | 2026-01-05 01:06:00 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:00.601877 | orchestrator | 2026-01-05 01:06:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:00.604310 | orchestrator | 2026-01-05 01:06:00 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:00.606992 | orchestrator | 2026-01-05 01:06:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:00.607081 | orchestrator | 2026-01-05 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:03.660694 | orchestrator | 2026-01-05 01:06:03 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:03.662604 | orchestrator | 2026-01-05 01:06:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:03.667213 | orchestrator | 2026-01-05 01:06:03 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:03.669603 | orchestrator | 2026-01-05 01:06:03 | INFO  | Task 
861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:03.669791 | orchestrator | 2026-01-05 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:06.717883 | orchestrator | 2026-01-05 01:06:06 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:06.719453 | orchestrator | 2026-01-05 01:06:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:06.722483 | orchestrator | 2026-01-05 01:06:06 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:06.725983 | orchestrator | 2026-01-05 01:06:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:06.726078 | orchestrator | 2026-01-05 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:09.775824 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:09.777279 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:09.779776 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:09.780715 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:09.781022 | orchestrator | 2026-01-05 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:12.821481 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:12.824263 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:12.827187 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:12.829715 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task 
861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:12.829947 | orchestrator | 2026-01-05 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:15.877804 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:15.878646 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:15.879929 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:15.881080 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:15.881537 | orchestrator | 2026-01-05 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:18.930925 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:18.937896 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:18.938693 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:18.939970 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:18.940007 | orchestrator | 2026-01-05 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:21.987568 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:21.989779 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:21.990898 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:21.991738 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task 
861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:21.991971 | orchestrator | 2026-01-05 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:25.053440 | orchestrator | 2026-01-05 01:06:25 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:25.055590 | orchestrator | 2026-01-05 01:06:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:25.057205 | orchestrator | 2026-01-05 01:06:25 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:25.059622 | orchestrator | 2026-01-05 01:06:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:25.059701 | orchestrator | 2026-01-05 01:06:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:28.107286 | orchestrator | 2026-01-05 01:06:28 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:28.109539 | orchestrator | 2026-01-05 01:06:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:28.110732 | orchestrator | 2026-01-05 01:06:28 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:28.112012 | orchestrator | 2026-01-05 01:06:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:28.112048 | orchestrator | 2026-01-05 01:06:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:31.161917 | orchestrator | 2026-01-05 01:06:31 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:06:31.164122 | orchestrator | 2026-01-05 01:06:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:06:31.166420 | orchestrator | 2026-01-05 01:06:31 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:06:31.167990 | orchestrator | 2026-01-05 01:06:31 | INFO  | Task 
861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:06:31.168054 | orchestrator | 2026-01-05 01:06:31 | INFO  | Wait 1 second(s) until the next check [... repeated polling output trimmed: tasks e9a958a8-304a-4547-9f37-57b11bbf1689, afe8ab2b-12c8-47a5-a936-080dda967fc3, ab80a91a-ab72-4899-b956-ba859a2f4d1d and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remained in state STARTED, rechecked every ~3 seconds from 01:06:34 through 01:07:50 ...] 2026-01-05 01:07:53.615279 | orchestrator | 2026-01-05 01:07:53 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state STARTED 2026-01-05 01:07:53.617306 | orchestrator | 2026-01-05 01:07:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:07:53.618975 | orchestrator | 2026-01-05 01:07:53 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:07:53.621851 | orchestrator | 2026-01-05 01:07:53 | INFO  | Task
861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:07:53.621968 | orchestrator | 2026-01-05 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:56.666877 | orchestrator | 2026-01-05 01:07:56 | INFO  | Task e9a958a8-304a-4547-9f37-57b11bbf1689 is in state SUCCESS 2026-01-05 01:07:56.668737 | orchestrator | 2026-01-05 01:07:56.668797 | orchestrator | 2026-01-05 01:07:56.668805 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:07:56.668813 | orchestrator | 2026-01-05 01:07:56.668820 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:07:56.668827 | orchestrator | Monday 05 January 2026 01:06:01 +0000 (0:00:00.287) 0:00:00.287 ******** 2026-01-05 01:07:56.668834 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:07:56.668841 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:07:56.668848 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:07:56.668854 | orchestrator | 2026-01-05 01:07:56.668861 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:07:56.668867 | orchestrator | Monday 05 January 2026 01:06:01 +0000 (0:00:00.293) 0:00:00.581 ******** 2026-01-05 01:07:56.668873 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-05 01:07:56.668880 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-05 01:07:56.668887 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-05 01:07:56.668893 | orchestrator | 2026-01-05 01:07:56.668899 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-05 01:07:56.668905 | orchestrator | 2026-01-05 01:07:56.668912 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-05 01:07:56.668918 | orchestrator | Monday 05 January 2026 01:06:02 +0000 
(0:00:00.481) 0:00:01.063 ******** 2026-01-05 01:07:56.668924 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:07:56.668932 | orchestrator | 2026-01-05 01:07:56.668938 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-05 01:07:56.668944 | orchestrator | Monday 05 January 2026 01:06:02 +0000 (0:00:00.525) 0:00:01.588 ******** 2026-01-05 01:07:56.668953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.668985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669014 | orchestrator | 2026-01-05 01:07:56.669020 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-05 01:07:56.669026 | orchestrator | Monday 05 January 2026 01:06:03 +0000 (0:00:00.700) 0:00:02.289 ******** 2026-01-05 01:07:56.669033 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:07:56.669040 | orchestrator | 2026-01-05 01:07:56.669046 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-05 01:07:56.669052 | orchestrator | Monday 05 January 2026 01:06:04 +0000 (0:00:00.843) 0:00:03.132 ******** 2026-01-05 01:07:56.669058 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:07:56.669065 | orchestrator | 2026-01-05 01:07:56.669071 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-05 01:07:56.669108 | orchestrator | Monday 05 January 2026 01:06:05 +0000 (0:00:00.737) 0:00:03.870 ******** 2026-01-05 01:07:56.669116 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669142 | orchestrator | 2026-01-05 01:07:56.669149 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-05 01:07:56.669176 | orchestrator | Monday 05 January 2026 01:06:06 +0000 (0:00:01.533) 0:00:05.404 ******** 2026-01-05 01:07:56.669188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 01:07:56.669195 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:56.669202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': 
['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 01:07:56.669208 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:07:56.669220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 01:07:56.669226 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:07:56.669233 | orchestrator | 2026-01-05 01:07:56.669239 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-05 01:07:56.669250 | orchestrator | Monday 05 January 2026 01:06:07 +0000 (0:00:00.482) 0:00:05.887 ******** 2026-01-05 01:07:56.669258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 01:07:56.669265 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:56.669273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 01:07:56.669281 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:07:56.669291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-05 01:07:56.669300 | 
orchestrator | skipping: [testbed-node-2] 2026-01-05 01:07:56.669307 | orchestrator | 2026-01-05 01:07:56.669315 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-05 01:07:56.669322 | orchestrator | Monday 05 January 2026 01:06:08 +0000 (0:00:00.967) 0:00:06.854 ******** 2026-01-05 01:07:56.669334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669363 | orchestrator | 2026-01-05 01:07:56.669370 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-05 01:07:56.669377 | orchestrator | Monday 05 January 2026 01:06:09 +0000 (0:00:01.362) 0:00:08.217 ******** 2026-01-05 01:07:56.669385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-05 01:07:56.669396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:07:56.669404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:07:56.669412 | orchestrator |
2026-01-05 01:07:56.669419 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-01-05 01:07:56.669429 | orchestrator | Monday 05 January 2026 01:06:10 +0000 (0:00:01.440) 0:00:09.657 ********
2026-01-05 01:07:56.669437 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:56.669449 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:56.669457 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:56.669464 | orchestrator |
2026-01-05 01:07:56.669472 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-01-05 01:07:56.669479 | orchestrator | Monday 05 January 2026 01:06:11 +0000 (0:00:00.496) 0:00:10.153 ********
2026-01-05 01:07:56.669487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-05 01:07:56.669494 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-05 01:07:56.669502 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-05 01:07:56.669509 | orchestrator |
2026-01-05 01:07:56.669516 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-01-05 01:07:56.669524 | orchestrator | Monday 05 January 2026 01:06:12 +0000 (0:00:01.305) 0:00:11.459 ********
2026-01-05 01:07:56.669532 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-05 01:07:56.669540 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-05 01:07:56.669547 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-05 01:07:56.669554 | orchestrator |
2026-01-05 01:07:56.669562 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-01-05 01:07:56.669569 | orchestrator | Monday 05 January 2026 01:06:14 +0000 (0:00:01.438) 0:00:12.897 ********
2026-01-05 01:07:56.669577 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:07:56.669584 | orchestrator |
2026-01-05 01:07:56.669592 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-01-05 01:07:56.669599 | orchestrator | Monday 05 January 2026 01:06:14 +0000 (0:00:00.794) 0:00:13.691 ********
2026-01-05 01:07:56.669610 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:07:56.669621 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:07:56.669631 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:07:56.669642 | orchestrator |
2026-01-05 01:07:56.669651 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-01-05 01:07:56.669659 | orchestrator | Monday 05 January 2026 01:06:15 +0000 (0:00:00.763) 0:00:14.455 ********
2026-01-05 01:07:56.669668 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:56.669684 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:56.669695 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:56.669705 | orchestrator |
2026-01-05 01:07:56.669714 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-01-05 01:07:56.669724 | orchestrator | Monday 05 January 2026 01:06:17 +0000 (0:00:01.726) 0:00:16.181 ********
2026-01-05 01:07:56.669734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:07:56.669751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:07:56.669821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:07:56.669834 | orchestrator |
2026-01-05 01:07:56.669845 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-01-05 01:07:56.669856 | orchestrator | Monday 05 January 2026 01:06:18 +0000 (0:00:01.061) 0:00:17.243 ********
2026-01-05 01:07:56.669867 | orchestrator | changed: [testbed-node-0] => {
2026-01-05 01:07:56.669877 | orchestrator |     "msg": "Notifying handlers"
2026-01-05 01:07:56.669887 | orchestrator | }
2026-01-05 01:07:56.669898 | orchestrator | changed: [testbed-node-1] => {
2026-01-05 01:07:56.669908 | orchestrator |     "msg": "Notifying handlers"
2026-01-05 01:07:56.669919 | orchestrator | }
2026-01-05 01:07:56.669929 | orchestrator | changed: [testbed-node-2] => {
2026-01-05 01:07:56.669939 | orchestrator |     "msg": "Notifying handlers"
2026-01-05 01:07:56.669950 | orchestrator | }
2026-01-05 01:07:56.669960 | orchestrator |
2026-01-05 01:07:56.669970 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-05 01:07:56.669980 | orchestrator | Monday 05 January 2026 01:06:18 +0000 (0:00:00.358) 0:00:17.602 ********
2026-01-05 01:07:56.669991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:07:56.670002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:07:56.670066 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:56.670082 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:56.670099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-05 01:07:56.670119 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:56.670130 | orchestrator |
2026-01-05 01:07:56.670141 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-01-05 01:07:56.670168 | orchestrator | Monday 05 January 2026 01:06:19 +0000 (0:00:00.842) 0:00:18.444 ********
2026-01-05 01:07:56.670179 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:56.670190 | orchestrator |
2026-01-05 01:07:56.670200 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-01-05 01:07:56.670210 | orchestrator | Monday 05 January 2026 01:06:22 +0000 (0:00:02.374) 0:00:20.819 ********
2026-01-05 01:07:56.670221 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:56.670231 | orchestrator |
2026-01-05 01:07:56.670241 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 01:07:56.670251 | orchestrator | Monday 05 January 2026 01:06:24 +0000 (0:00:02.390) 0:00:23.209 ********
2026-01-05 01:07:56.670262 | orchestrator |
2026-01-05 01:07:56.670272 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 01:07:56.670282 | orchestrator | Monday 05 January 2026 01:06:24 +0000 (0:00:00.070) 0:00:23.280 ********
2026-01-05 01:07:56.670292 | orchestrator |
2026-01-05 01:07:56.670301 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 01:07:56.670318 | orchestrator | Monday 05 January 2026 01:06:24 +0000 (0:00:00.093) 0:00:23.348 ********
2026-01-05 01:07:56.670327 | orchestrator |
2026-01-05 01:07:56.670336 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-01-05 01:07:56.670347 | orchestrator | Monday 05 January 2026 01:06:24 +0000 (0:00:00.093) 0:00:23.442 ********
2026-01-05 01:07:56.670356 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:56.670366 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:56.670376 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:56.670387 | orchestrator |
2026-01-05 01:07:56.670397 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-01-05 01:07:56.670408 | orchestrator | Monday 05 January 2026 01:06:26 +0000 (0:00:01.859) 0:00:25.301 ********
2026-01-05 01:07:56.670418 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:56.670428 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:56.670438 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-01-05 01:07:56.670450 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-01-05 01:07:56.670461 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-05 01:07:56.670471 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-01-05 01:07:56.670482 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:07:56.670493 | orchestrator |
2026-01-05 01:07:56.670503 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-05 01:07:56.670514 | orchestrator | Monday 05 January 2026 01:07:17 +0000 (0:00:51.264) 0:01:16.566 ********
2026-01-05 01:07:56.670524 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:56.670534 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:56.670544 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:56.670554 | orchestrator |
2026-01-05 01:07:56.670564 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-05 01:07:56.670586 | orchestrator | Monday 05 January 2026 01:07:49 +0000 (0:00:31.282) 0:01:47.849 ********
2026-01-05 01:07:56.670596 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:07:56.670605 | orchestrator |
2026-01-05 01:07:56.670611 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-05 01:07:56.670617 | orchestrator | Monday 05 January 2026 01:07:51 +0000 (0:00:02.239) 0:01:50.089 ********
2026-01-05 01:07:56.670624 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:56.670630 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:56.670636 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:56.670642 | orchestrator |
2026-01-05 01:07:56.670648 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-05 01:07:56.670654 | orchestrator | Monday 05 January 2026 01:07:51 +0000 (0:00:00.327) 0:01:50.416 ********
2026-01-05 01:07:56.670662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-05 01:07:56.670673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-05 01:07:56.670681 | orchestrator |
2026-01-05 01:07:56.670687 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-05 01:07:56.670693 | orchestrator | Monday 05 January 2026 01:07:54 +0000 (0:00:02.449) 0:01:52.866 ********
2026-01-05 01:07:56.670699 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:56.670706 | orchestrator |
2026-01-05 01:07:56.670712 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:07:56.670729 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 01:07:56.670737 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 01:07:56.670744 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 01:07:56.670750 | orchestrator |
2026-01-05 01:07:56.670756 | orchestrator |
2026-01-05 01:07:56.670762 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:07:56.670768 | orchestrator | Monday 05 January 2026 01:07:54 +0000 (0:00:00.243) 0:01:53.109 ********
2026-01-05 01:07:56.670775 | orchestrator | ===============================================================================
2026-01-05 01:07:56.670781 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.26s
2026-01-05 01:07:56.670787 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.28s
2026-01-05 01:07:56.670793 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.45s
2026-01-05 01:07:56.670799 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.39s
2026-01-05 01:07:56.670805 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.37s
2026-01-05 01:07:56.670812 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.24s
2026-01-05 01:07:56.670823 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s
2026-01-05 01:07:56.670830 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.73s
2026-01-05 01:07:56.670836 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.53s
2026-01-05 01:07:56.670842 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.44s
2026-01-05 01:07:56.670854 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.44s
2026-01-05 01:07:56.670861 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.36s
2026-01-05 01:07:56.670867 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.31s
2026-01-05 01:07:56.670873 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.06s
2026-01-05 01:07:56.670879 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.97s
2026-01-05 01:07:56.670885 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.84s
2026-01-05 01:07:56.670891 | orchestrator |
service-check-containers : Include tasks -------------------------------- 0.84s
2026-01-05 01:07:56.670898 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.79s
2026-01-05 01:07:56.670904 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.76s
2026-01-05 01:07:56.670910 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.74s
2026-01-05 01:07:56.670917 | orchestrator | 2026-01-05 01:07:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:07:56.673115 | orchestrator | 2026-01-05 01:07:56 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:07:56.675836 | orchestrator | 2026-01-05 01:07:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:07:56.675894 | orchestrator | 2026-01-05 01:07:56 | INFO  | Wait 1 second(s) until the next check
[... the same status check of tasks afe8ab2b-12c8-47a5-a936-080dda967fc3, ab80a91a-ab72-4899-b956-ba859a2f4d1d, and 861ec4e0-4387-4901-b7ab-9d4f13823dbe repeats roughly every 3 seconds from 01:07:59 through 01:09:34; all three tasks remain in state STARTED, each cycle ending with "Wait 1 second(s) until the next check" ...]
2026-01-05 01:09:37.439693 | orchestrator | 2026-01-05 01:09:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:09:37.441140 | orchestrator | 2026-01-05 01:09:37 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED
2026-01-05 01:09:37.443171 | orchestrator | 2026-01-05 01:09:37 |
INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:09:37.443212 | orchestrator | 2026-01-05 01:09:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:40.492360 | orchestrator | 2026-01-05 01:09:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:09:40.494280 | orchestrator | 2026-01-05 01:09:40 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:09:40.497359 | orchestrator | 2026-01-05 01:09:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:09:40.497454 | orchestrator | 2026-01-05 01:09:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:43.555718 | orchestrator | 2026-01-05 01:09:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:09:43.557393 | orchestrator | 2026-01-05 01:09:43 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:09:43.560144 | orchestrator | 2026-01-05 01:09:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:09:43.560214 | orchestrator | 2026-01-05 01:09:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:46.598879 | orchestrator | 2026-01-05 01:09:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:09:46.600470 | orchestrator | 2026-01-05 01:09:46 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:09:46.602558 | orchestrator | 2026-01-05 01:09:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:09:46.602602 | orchestrator | 2026-01-05 01:09:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:49.653254 | orchestrator | 2026-01-05 01:09:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:09:49.654747 | orchestrator | 2026-01-05 01:09:49 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in 
state STARTED 2026-01-05 01:09:49.656743 | orchestrator | 2026-01-05 01:09:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:09:49.656780 | orchestrator | 2026-01-05 01:09:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:52.702382 | orchestrator | 2026-01-05 01:09:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:09:52.706825 | orchestrator | 2026-01-05 01:09:52 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:09:52.712900 | orchestrator | 2026-01-05 01:09:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:09:52.713794 | orchestrator | 2026-01-05 01:09:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:55.761305 | orchestrator | 2026-01-05 01:09:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:09:55.762131 | orchestrator | 2026-01-05 01:09:55 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state STARTED 2026-01-05 01:09:55.763328 | orchestrator | 2026-01-05 01:09:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:09:55.763387 | orchestrator | 2026-01-05 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:58.813485 | orchestrator | 2026-01-05 01:09:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:09:58.813580 | orchestrator | 2026-01-05 01:09:58 | INFO  | Task ab80a91a-ab72-4899-b956-ba859a2f4d1d is in state SUCCESS 2026-01-05 01:09:58.815088 | orchestrator | 2026-01-05 01:09:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:09:58.815120 | orchestrator | 2026-01-05 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:01.869690 | orchestrator | 2026-01-05 01:10:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:01.870625 | orchestrator 
| 2026-01-05 01:10:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:01.870644 | orchestrator | 2026-01-05 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:04.922436 | orchestrator | 2026-01-05 01:10:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:04.923789 | orchestrator | 2026-01-05 01:10:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:04.923821 | orchestrator | 2026-01-05 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:07.979519 | orchestrator | 2026-01-05 01:10:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:07.980932 | orchestrator | 2026-01-05 01:10:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:07.981067 | orchestrator | 2026-01-05 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:11.038639 | orchestrator | 2026-01-05 01:10:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:11.041430 | orchestrator | 2026-01-05 01:10:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:11.041642 | orchestrator | 2026-01-05 01:10:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:14.090488 | orchestrator | 2026-01-05 01:10:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:14.092298 | orchestrator | 2026-01-05 01:10:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:14.092346 | orchestrator | 2026-01-05 01:10:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:17.144370 | orchestrator | 2026-01-05 01:10:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:17.147868 | orchestrator | 2026-01-05 01:10:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in 
state STARTED 2026-01-05 01:10:17.147922 | orchestrator | 2026-01-05 01:10:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:20.196288 | orchestrator | 2026-01-05 01:10:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:20.197249 | orchestrator | 2026-01-05 01:10:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:20.197301 | orchestrator | 2026-01-05 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:23.255075 | orchestrator | 2026-01-05 01:10:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:23.257441 | orchestrator | 2026-01-05 01:10:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:23.257496 | orchestrator | 2026-01-05 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:26.313547 | orchestrator | 2026-01-05 01:10:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:26.316036 | orchestrator | 2026-01-05 01:10:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:26.316127 | orchestrator | 2026-01-05 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:29.368329 | orchestrator | 2026-01-05 01:10:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:29.369593 | orchestrator | 2026-01-05 01:10:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:29.369646 | orchestrator | 2026-01-05 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:32.422176 | orchestrator | 2026-01-05 01:10:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:32.424009 | orchestrator | 2026-01-05 01:10:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:32.424087 | orchestrator | 2026-01-05 01:10:32 | 
INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:35.471472 | orchestrator | 2026-01-05 01:10:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:35.473691 | orchestrator | 2026-01-05 01:10:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:35.474115 | orchestrator | 2026-01-05 01:10:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:38.523739 | orchestrator | 2026-01-05 01:10:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:38.525224 | orchestrator | 2026-01-05 01:10:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:38.525308 | orchestrator | 2026-01-05 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:41.572032 | orchestrator | 2026-01-05 01:10:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:41.575641 | orchestrator | 2026-01-05 01:10:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:41.575835 | orchestrator | 2026-01-05 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:44.619555 | orchestrator | 2026-01-05 01:10:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:44.620753 | orchestrator | 2026-01-05 01:10:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:44.620828 | orchestrator | 2026-01-05 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:47.668360 | orchestrator | 2026-01-05 01:10:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:47.669638 | orchestrator | 2026-01-05 01:10:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:47.669761 | orchestrator | 2026-01-05 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:50.715781 | 
orchestrator | 2026-01-05 01:10:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:50.716300 | orchestrator | 2026-01-05 01:10:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:50.716322 | orchestrator | 2026-01-05 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:53.762415 | orchestrator | 2026-01-05 01:10:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:53.762514 | orchestrator | 2026-01-05 01:10:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:53.762526 | orchestrator | 2026-01-05 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:56.804286 | orchestrator | 2026-01-05 01:10:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:56.805575 | orchestrator | 2026-01-05 01:10:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:56.805713 | orchestrator | 2026-01-05 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:59.850350 | orchestrator | 2026-01-05 01:10:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:10:59.852420 | orchestrator | 2026-01-05 01:10:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:10:59.852521 | orchestrator | 2026-01-05 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:02.895555 | orchestrator | 2026-01-05 01:11:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:02.897226 | orchestrator | 2026-01-05 01:11:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:02.897300 | orchestrator | 2026-01-05 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:05.947227 | orchestrator | 2026-01-05 01:11:05 | INFO  | Task 
afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:05.949996 | orchestrator | 2026-01-05 01:11:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:05.950120 | orchestrator | 2026-01-05 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:09.021385 | orchestrator | 2026-01-05 01:11:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:09.023542 | orchestrator | 2026-01-05 01:11:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:09.023639 | orchestrator | 2026-01-05 01:11:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:12.064594 | orchestrator | 2026-01-05 01:11:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:12.065374 | orchestrator | 2026-01-05 01:11:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:12.065402 | orchestrator | 2026-01-05 01:11:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:15.108726 | orchestrator | 2026-01-05 01:11:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:15.110470 | orchestrator | 2026-01-05 01:11:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:15.110591 | orchestrator | 2026-01-05 01:11:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:18.157538 | orchestrator | 2026-01-05 01:11:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:18.158336 | orchestrator | 2026-01-05 01:11:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:18.158372 | orchestrator | 2026-01-05 01:11:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:21.212497 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 
01:11:21.213137 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:21.213191 | orchestrator | 2026-01-05 01:11:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:24.255722 | orchestrator | 2026-01-05 01:11:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:24.258705 | orchestrator | 2026-01-05 01:11:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:24.258780 | orchestrator | 2026-01-05 01:11:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:27.307148 | orchestrator | 2026-01-05 01:11:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:27.308966 | orchestrator | 2026-01-05 01:11:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:27.309033 | orchestrator | 2026-01-05 01:11:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:30.354913 | orchestrator | 2026-01-05 01:11:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:30.357367 | orchestrator | 2026-01-05 01:11:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:30.357435 | orchestrator | 2026-01-05 01:11:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:33.408435 | orchestrator | 2026-01-05 01:11:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:33.410007 | orchestrator | 2026-01-05 01:11:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:33.410114 | orchestrator | 2026-01-05 01:11:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:36.466748 | orchestrator | 2026-01-05 01:11:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:36.468135 | orchestrator | 2026-01-05 01:11:36 | INFO  | Task 
861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:36.468230 | orchestrator | 2026-01-05 01:11:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:39.519101 | orchestrator | 2026-01-05 01:11:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:39.520842 | orchestrator | 2026-01-05 01:11:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:39.520931 | orchestrator | 2026-01-05 01:11:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:42.569935 | orchestrator | 2026-01-05 01:11:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:42.572165 | orchestrator | 2026-01-05 01:11:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:42.572215 | orchestrator | 2026-01-05 01:11:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:45.611379 | orchestrator | 2026-01-05 01:11:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:45.612972 | orchestrator | 2026-01-05 01:11:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:45.613033 | orchestrator | 2026-01-05 01:11:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:48.659336 | orchestrator | 2026-01-05 01:11:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:48.661296 | orchestrator | 2026-01-05 01:11:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:48.661377 | orchestrator | 2026-01-05 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:51.709640 | orchestrator | 2026-01-05 01:11:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:51.710685 | orchestrator | 2026-01-05 01:11:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:11:51.710743 | orchestrator | 2026-01-05 01:11:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:54.753800 | orchestrator | 2026-01-05 01:11:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:54.755298 | orchestrator | 2026-01-05 01:11:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:54.755374 | orchestrator | 2026-01-05 01:11:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:57.811067 | orchestrator | 2026-01-05 01:11:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:11:57.813779 | orchestrator | 2026-01-05 01:11:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:11:57.813914 | orchestrator | 2026-01-05 01:11:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:00.866172 | orchestrator | 2026-01-05 01:12:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:00.867405 | orchestrator | 2026-01-05 01:12:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:00.867459 | orchestrator | 2026-01-05 01:12:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:03.918385 | orchestrator | 2026-01-05 01:12:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:03.919131 | orchestrator | 2026-01-05 01:12:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:03.919161 | orchestrator | 2026-01-05 01:12:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:06.970255 | orchestrator | 2026-01-05 01:12:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:06.972565 | orchestrator | 2026-01-05 01:12:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:06.972622 | orchestrator | 2026-01-05 01:12:06 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:12:10.039769 | orchestrator | 2026-01-05 01:12:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:10.042664 | orchestrator | 2026-01-05 01:12:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:10.042770 | orchestrator | 2026-01-05 01:12:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:13.086875 | orchestrator | 2026-01-05 01:12:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:13.091148 | orchestrator | 2026-01-05 01:12:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:13.091240 | orchestrator | 2026-01-05 01:12:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:16.141362 | orchestrator | 2026-01-05 01:12:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:16.143371 | orchestrator | 2026-01-05 01:12:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:16.143485 | orchestrator | 2026-01-05 01:12:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:19.193141 | orchestrator | 2026-01-05 01:12:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:19.194037 | orchestrator | 2026-01-05 01:12:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:19.194059 | orchestrator | 2026-01-05 01:12:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:22.246772 | orchestrator | 2026-01-05 01:12:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:22.248554 | orchestrator | 2026-01-05 01:12:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:22.248626 | orchestrator | 2026-01-05 01:12:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:25.291319 | orchestrator | 2026-01-05 
01:12:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:25.293202 | orchestrator | 2026-01-05 01:12:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:25.293974 | orchestrator | 2026-01-05 01:12:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:28.341206 | orchestrator | 2026-01-05 01:12:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:28.344172 | orchestrator | 2026-01-05 01:12:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:28.344291 | orchestrator | 2026-01-05 01:12:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:31.387943 | orchestrator | 2026-01-05 01:12:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:31.389122 | orchestrator | 2026-01-05 01:12:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:31.389158 | orchestrator | 2026-01-05 01:12:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:34.434539 | orchestrator | 2026-01-05 01:12:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:34.436172 | orchestrator | 2026-01-05 01:12:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:34.436225 | orchestrator | 2026-01-05 01:12:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:37.487698 | orchestrator | 2026-01-05 01:12:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:37.489211 | orchestrator | 2026-01-05 01:12:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:37.489245 | orchestrator | 2026-01-05 01:12:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:40.536357 | orchestrator | 2026-01-05 01:12:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:12:40.536499 | orchestrator | 2026-01-05 01:12:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:40.536824 | orchestrator | 2026-01-05 01:12:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:43.584417 | orchestrator | 2026-01-05 01:12:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:43.586330 | orchestrator | 2026-01-05 01:12:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:43.586377 | orchestrator | 2026-01-05 01:12:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:46.633260 | orchestrator | 2026-01-05 01:12:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:46.635984 | orchestrator | 2026-01-05 01:12:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:46.636133 | orchestrator | 2026-01-05 01:12:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:49.684156 | orchestrator | 2026-01-05 01:12:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:49.685843 | orchestrator | 2026-01-05 01:12:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:49.685898 | orchestrator | 2026-01-05 01:12:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:52.729187 | orchestrator | 2026-01-05 01:12:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:52.730877 | orchestrator | 2026-01-05 01:12:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:52.730941 | orchestrator | 2026-01-05 01:12:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:55.768750 | orchestrator | 2026-01-05 01:12:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:55.769906 | orchestrator | 2026-01-05 01:12:55 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:55.770099 | orchestrator | 2026-01-05 01:12:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:58.820270 | orchestrator | 2026-01-05 01:12:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:12:58.823568 | orchestrator | 2026-01-05 01:12:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:12:58.823665 | orchestrator | 2026-01-05 01:12:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:13:01.871899 | orchestrator | 2026-01-05 01:13:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:13:01.873813 | orchestrator | 2026-01-05 01:13:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:13:01.873891 | orchestrator | 2026-01-05 01:13:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:13:04.919116 | orchestrator | 2026-01-05 01:13:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:13:04.920729 | orchestrator | 2026-01-05 01:13:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:13:04.920773 | orchestrator | 2026-01-05 01:13:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:13:07.966123 | orchestrator | 2026-01-05 01:13:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:13:07.966912 | orchestrator | 2026-01-05 01:13:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:13:07.966939 | orchestrator | 2026-01-05 01:13:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:13:11.020673 | orchestrator | 2026-01-05 01:13:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:13:11.024250 | orchestrator | 2026-01-05 01:13:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:13:11.024337 | orchestrator | 2026-01-05 01:13:11 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:13:14.068867 | orchestrator | 2026-01-05 01:13:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:13:14.069858 | orchestrator | 2026-01-05 01:13:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:13:14.069897 | orchestrator | 2026-01-05 01:13:14 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:18:43.657963 | orchestrator | 2026-01-05 01:18:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:18:43.659638 | orchestrator | 2026-01-05 01:18:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:18:43.659715 | orchestrator | 2026-01-05 01:18:43 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 01:18:46.714108 | orchestrator | 2026-01-05 01:18:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:18:46.716680 | orchestrator | 2026-01-05 01:18:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:18:46.716741 | orchestrator | 2026-01-05 01:18:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:49.765802 | orchestrator | 2026-01-05 01:18:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:18:49.765909 | orchestrator | 2026-01-05 01:18:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:18:49.765926 | orchestrator | 2026-01-05 01:18:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:52.816800 | orchestrator | 2026-01-05 01:18:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:18:52.819633 | orchestrator | 2026-01-05 01:18:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:18:52.819692 | orchestrator | 2026-01-05 01:18:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:55.864612 | orchestrator | 2026-01-05 01:18:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:18:55.865211 | orchestrator | 2026-01-05 01:18:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:18:55.865458 | orchestrator | 2026-01-05 01:18:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:58.916123 | orchestrator | 2026-01-05 01:18:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:18:58.918164 | orchestrator | 2026-01-05 01:18:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:18:58.918191 | orchestrator | 2026-01-05 01:18:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:01.966192 | orchestrator | 2026-01-05 
01:19:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:01.967574 | orchestrator | 2026-01-05 01:19:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:01.967703 | orchestrator | 2026-01-05 01:19:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:05.016542 | orchestrator | 2026-01-05 01:19:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:05.018430 | orchestrator | 2026-01-05 01:19:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:05.018496 | orchestrator | 2026-01-05 01:19:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:08.068520 | orchestrator | 2026-01-05 01:19:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:08.068657 | orchestrator | 2026-01-05 01:19:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:08.068755 | orchestrator | 2026-01-05 01:19:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:11.118372 | orchestrator | 2026-01-05 01:19:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:11.120796 | orchestrator | 2026-01-05 01:19:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:11.120903 | orchestrator | 2026-01-05 01:19:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:14.199104 | orchestrator | 2026-01-05 01:19:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:14.199952 | orchestrator | 2026-01-05 01:19:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:14.199977 | orchestrator | 2026-01-05 01:19:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:17.262708 | orchestrator | 2026-01-05 01:19:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:19:17.263225 | orchestrator | 2026-01-05 01:19:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:17.263399 | orchestrator | 2026-01-05 01:19:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:20.320583 | orchestrator | 2026-01-05 01:19:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:20.322762 | orchestrator | 2026-01-05 01:19:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:20.322879 | orchestrator | 2026-01-05 01:19:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:23.376519 | orchestrator | 2026-01-05 01:19:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:23.377379 | orchestrator | 2026-01-05 01:19:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:23.377426 | orchestrator | 2026-01-05 01:19:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:26.432927 | orchestrator | 2026-01-05 01:19:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:26.434403 | orchestrator | 2026-01-05 01:19:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:26.434459 | orchestrator | 2026-01-05 01:19:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:29.486315 | orchestrator | 2026-01-05 01:19:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:29.488448 | orchestrator | 2026-01-05 01:19:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:29.488528 | orchestrator | 2026-01-05 01:19:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:32.540487 | orchestrator | 2026-01-05 01:19:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:32.542871 | orchestrator | 2026-01-05 01:19:32 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:32.542941 | orchestrator | 2026-01-05 01:19:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:35.587716 | orchestrator | 2026-01-05 01:19:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:35.589617 | orchestrator | 2026-01-05 01:19:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:35.589685 | orchestrator | 2026-01-05 01:19:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:38.636088 | orchestrator | 2026-01-05 01:19:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:38.638084 | orchestrator | 2026-01-05 01:19:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:38.638258 | orchestrator | 2026-01-05 01:19:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:41.683486 | orchestrator | 2026-01-05 01:19:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:41.685080 | orchestrator | 2026-01-05 01:19:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:41.685218 | orchestrator | 2026-01-05 01:19:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:44.734271 | orchestrator | 2026-01-05 01:19:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:44.735777 | orchestrator | 2026-01-05 01:19:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:44.735885 | orchestrator | 2026-01-05 01:19:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:47.789178 | orchestrator | 2026-01-05 01:19:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:47.790997 | orchestrator | 2026-01-05 01:19:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:19:47.791066 | orchestrator | 2026-01-05 01:19:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:50.836336 | orchestrator | 2026-01-05 01:19:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:50.837057 | orchestrator | 2026-01-05 01:19:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:50.837145 | orchestrator | 2026-01-05 01:19:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:53.881998 | orchestrator | 2026-01-05 01:19:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:53.883436 | orchestrator | 2026-01-05 01:19:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:53.883488 | orchestrator | 2026-01-05 01:19:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:56.935493 | orchestrator | 2026-01-05 01:19:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:56.939077 | orchestrator | 2026-01-05 01:19:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:56.939163 | orchestrator | 2026-01-05 01:19:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:59.986374 | orchestrator | 2026-01-05 01:19:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:19:59.988040 | orchestrator | 2026-01-05 01:19:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:19:59.988109 | orchestrator | 2026-01-05 01:19:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:03.041286 | orchestrator | 2026-01-05 01:20:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:03.042401 | orchestrator | 2026-01-05 01:20:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:03.043462 | orchestrator | 2026-01-05 01:20:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:20:06.091221 | orchestrator | 2026-01-05 01:20:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:06.091749 | orchestrator | 2026-01-05 01:20:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:06.091854 | orchestrator | 2026-01-05 01:20:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:09.143520 | orchestrator | 2026-01-05 01:20:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:09.145041 | orchestrator | 2026-01-05 01:20:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:09.145092 | orchestrator | 2026-01-05 01:20:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:12.195727 | orchestrator | 2026-01-05 01:20:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:12.197597 | orchestrator | 2026-01-05 01:20:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:12.197652 | orchestrator | 2026-01-05 01:20:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:15.238288 | orchestrator | 2026-01-05 01:20:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:15.238895 | orchestrator | 2026-01-05 01:20:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:15.238949 | orchestrator | 2026-01-05 01:20:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:18.290555 | orchestrator | 2026-01-05 01:20:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:18.292922 | orchestrator | 2026-01-05 01:20:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:18.293004 | orchestrator | 2026-01-05 01:20:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:21.339585 | orchestrator | 2026-01-05 
01:20:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:21.341919 | orchestrator | 2026-01-05 01:20:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:21.341971 | orchestrator | 2026-01-05 01:20:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:24.388701 | orchestrator | 2026-01-05 01:20:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:24.391692 | orchestrator | 2026-01-05 01:20:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:24.391773 | orchestrator | 2026-01-05 01:20:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:27.442942 | orchestrator | 2026-01-05 01:20:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:27.445999 | orchestrator | 2026-01-05 01:20:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:27.446190 | orchestrator | 2026-01-05 01:20:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:30.490786 | orchestrator | 2026-01-05 01:20:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:30.492516 | orchestrator | 2026-01-05 01:20:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:30.492606 | orchestrator | 2026-01-05 01:20:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:33.536987 | orchestrator | 2026-01-05 01:20:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:33.540439 | orchestrator | 2026-01-05 01:20:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:33.540524 | orchestrator | 2026-01-05 01:20:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:36.582739 | orchestrator | 2026-01-05 01:20:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:20:36.585278 | orchestrator | 2026-01-05 01:20:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:36.585328 | orchestrator | 2026-01-05 01:20:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:39.638514 | orchestrator | 2026-01-05 01:20:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:39.641873 | orchestrator | 2026-01-05 01:20:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:39.641939 | orchestrator | 2026-01-05 01:20:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:42.695764 | orchestrator | 2026-01-05 01:20:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:42.698992 | orchestrator | 2026-01-05 01:20:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:42.699047 | orchestrator | 2026-01-05 01:20:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:45.746223 | orchestrator | 2026-01-05 01:20:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:45.749979 | orchestrator | 2026-01-05 01:20:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:45.750111 | orchestrator | 2026-01-05 01:20:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:48.795584 | orchestrator | 2026-01-05 01:20:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:48.800572 | orchestrator | 2026-01-05 01:20:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:48.800651 | orchestrator | 2026-01-05 01:20:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:51.841689 | orchestrator | 2026-01-05 01:20:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:51.843394 | orchestrator | 2026-01-05 01:20:51 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:51.843468 | orchestrator | 2026-01-05 01:20:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:54.889874 | orchestrator | 2026-01-05 01:20:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:54.890858 | orchestrator | 2026-01-05 01:20:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:54.890909 | orchestrator | 2026-01-05 01:20:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:57.945044 | orchestrator | 2026-01-05 01:20:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:20:57.946765 | orchestrator | 2026-01-05 01:20:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:20:57.946871 | orchestrator | 2026-01-05 01:20:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:00.986867 | orchestrator | 2026-01-05 01:21:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:00.987910 | orchestrator | 2026-01-05 01:21:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:00.987958 | orchestrator | 2026-01-05 01:21:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:04.033948 | orchestrator | 2026-01-05 01:21:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:04.034400 | orchestrator | 2026-01-05 01:21:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:04.034455 | orchestrator | 2026-01-05 01:21:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:07.088890 | orchestrator | 2026-01-05 01:21:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:07.090675 | orchestrator | 2026-01-05 01:21:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:21:07.090750 | orchestrator | 2026-01-05 01:21:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:10.130621 | orchestrator | 2026-01-05 01:21:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:10.132297 | orchestrator | 2026-01-05 01:21:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:10.132331 | orchestrator | 2026-01-05 01:21:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:13.184127 | orchestrator | 2026-01-05 01:21:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:13.185117 | orchestrator | 2026-01-05 01:21:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:13.185164 | orchestrator | 2026-01-05 01:21:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:16.231032 | orchestrator | 2026-01-05 01:21:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:16.233315 | orchestrator | 2026-01-05 01:21:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:16.233376 | orchestrator | 2026-01-05 01:21:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:19.285217 | orchestrator | 2026-01-05 01:21:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:19.287278 | orchestrator | 2026-01-05 01:21:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:19.287369 | orchestrator | 2026-01-05 01:21:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:22.335480 | orchestrator | 2026-01-05 01:21:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:22.337776 | orchestrator | 2026-01-05 01:21:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:22.337954 | orchestrator | 2026-01-05 01:21:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:21:25.381334 | orchestrator | 2026-01-05 01:21:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:25.382556 | orchestrator | 2026-01-05 01:21:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:25.382606 | orchestrator | 2026-01-05 01:21:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:28.437562 | orchestrator | 2026-01-05 01:21:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:28.438695 | orchestrator | 2026-01-05 01:21:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:28.438820 | orchestrator | 2026-01-05 01:21:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:31.491776 | orchestrator | 2026-01-05 01:21:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:31.493261 | orchestrator | 2026-01-05 01:21:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:31.493311 | orchestrator | 2026-01-05 01:21:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:34.543360 | orchestrator | 2026-01-05 01:21:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:34.544441 | orchestrator | 2026-01-05 01:21:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:34.544479 | orchestrator | 2026-01-05 01:21:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:37.589779 | orchestrator | 2026-01-05 01:21:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:37.592578 | orchestrator | 2026-01-05 01:21:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:37.592674 | orchestrator | 2026-01-05 01:21:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:40.633505 | orchestrator | 2026-01-05 
01:21:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:40.633607 | orchestrator | 2026-01-05 01:21:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:40.633618 | orchestrator | 2026-01-05 01:21:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:43.679461 | orchestrator | 2026-01-05 01:21:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:43.680362 | orchestrator | 2026-01-05 01:21:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:43.680455 | orchestrator | 2026-01-05 01:21:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:46.726199 | orchestrator | 2026-01-05 01:21:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:46.729677 | orchestrator | 2026-01-05 01:21:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:46.729783 | orchestrator | 2026-01-05 01:21:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:49.778885 | orchestrator | 2026-01-05 01:21:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:49.779615 | orchestrator | 2026-01-05 01:21:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:49.779654 | orchestrator | 2026-01-05 01:21:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:52.825722 | orchestrator | 2026-01-05 01:21:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:52.825857 | orchestrator | 2026-01-05 01:21:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:52.825870 | orchestrator | 2026-01-05 01:21:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:55.877488 | orchestrator | 2026-01-05 01:21:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:21:55.879175 | orchestrator | 2026-01-05 01:21:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:55.879258 | orchestrator | 2026-01-05 01:21:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:58.933577 | orchestrator | 2026-01-05 01:21:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:21:58.935179 | orchestrator | 2026-01-05 01:21:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:21:58.935245 | orchestrator | 2026-01-05 01:21:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:01.989778 | orchestrator | 2026-01-05 01:22:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:01.993566 | orchestrator | 2026-01-05 01:22:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:22:01.993659 | orchestrator | 2026-01-05 01:22:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:05.052961 | orchestrator | 2026-01-05 01:22:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:05.055633 | orchestrator | 2026-01-05 01:22:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:22:05.055712 | orchestrator | 2026-01-05 01:22:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:08.101320 | orchestrator | 2026-01-05 01:22:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:08.102173 | orchestrator | 2026-01-05 01:22:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:22:08.102231 | orchestrator | 2026-01-05 01:22:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:11.154117 | orchestrator | 2026-01-05 01:22:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:11.156184 | orchestrator | 2026-01-05 01:22:11 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:22:11.156245 | orchestrator | 2026-01-05 01:22:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:14.208647 | orchestrator | 2026-01-05 01:22:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:14.209633 | orchestrator | 2026-01-05 01:22:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:22:14.209695 | orchestrator | 2026-01-05 01:22:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:17.258934 | orchestrator | 2026-01-05 01:22:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:17.260714 | orchestrator | 2026-01-05 01:22:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:22:17.260782 | orchestrator | 2026-01-05 01:22:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:20.310626 | orchestrator | 2026-01-05 01:22:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:20.313854 | orchestrator | 2026-01-05 01:22:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:22:20.313953 | orchestrator | 2026-01-05 01:22:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:23.358340 | orchestrator | 2026-01-05 01:22:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:23.359490 | orchestrator | 2026-01-05 01:22:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:22:23.359552 | orchestrator | 2026-01-05 01:22:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:22:26.402399 | orchestrator | 2026-01-05 01:22:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:22:26.403814 | orchestrator | 2026-01-05 01:22:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:22:26.403870 | orchestrator | 2026-01-05 01:22:26 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:22:29.454238 | orchestrator | 2026-01-05 01:22:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:22:29.456831 | orchestrator | 2026-01-05 01:22:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:22:29.456913 | orchestrator | 2026-01-05 01:22:29 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles repeated every ~3 seconds from 01:22:32 through 01:27:25; both tasks remained in state STARTED throughout ...]
2026-01-05 01:27:28.490340 | orchestrator | 2026-01-05 01:27:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:27:28.492776 | orchestrator | 2026-01-05 01:27:28 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:28.492859 | orchestrator | 2026-01-05 01:27:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:31.530194 | orchestrator | 2026-01-05 01:27:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:31.531420 | orchestrator | 2026-01-05 01:27:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:31.531498 | orchestrator | 2026-01-05 01:27:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:34.580415 | orchestrator | 2026-01-05 01:27:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:34.583079 | orchestrator | 2026-01-05 01:27:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:34.583202 | orchestrator | 2026-01-05 01:27:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:37.634959 | orchestrator | 2026-01-05 01:27:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:37.636330 | orchestrator | 2026-01-05 01:27:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:37.636446 | orchestrator | 2026-01-05 01:27:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:40.681183 | orchestrator | 2026-01-05 01:27:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:40.683244 | orchestrator | 2026-01-05 01:27:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:40.683309 | orchestrator | 2026-01-05 01:27:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:43.731336 | orchestrator | 2026-01-05 01:27:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:43.733246 | orchestrator | 2026-01-05 01:27:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:27:43.733342 | orchestrator | 2026-01-05 01:27:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:46.784411 | orchestrator | 2026-01-05 01:27:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:46.786798 | orchestrator | 2026-01-05 01:27:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:46.787019 | orchestrator | 2026-01-05 01:27:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:49.838376 | orchestrator | 2026-01-05 01:27:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:49.840073 | orchestrator | 2026-01-05 01:27:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:49.840136 | orchestrator | 2026-01-05 01:27:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:52.897350 | orchestrator | 2026-01-05 01:27:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:52.899057 | orchestrator | 2026-01-05 01:27:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:52.899161 | orchestrator | 2026-01-05 01:27:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:55.945214 | orchestrator | 2026-01-05 01:27:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:55.946864 | orchestrator | 2026-01-05 01:27:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:55.947043 | orchestrator | 2026-01-05 01:27:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:58.997019 | orchestrator | 2026-01-05 01:27:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:27:58.999647 | orchestrator | 2026-01-05 01:27:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:27:58.999780 | orchestrator | 2026-01-05 01:27:59 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:28:02.045485 | orchestrator | 2026-01-05 01:28:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:02.046340 | orchestrator | 2026-01-05 01:28:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:02.046447 | orchestrator | 2026-01-05 01:28:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:05.087000 | orchestrator | 2026-01-05 01:28:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:05.089163 | orchestrator | 2026-01-05 01:28:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:05.089218 | orchestrator | 2026-01-05 01:28:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:08.137799 | orchestrator | 2026-01-05 01:28:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:08.138879 | orchestrator | 2026-01-05 01:28:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:08.139126 | orchestrator | 2026-01-05 01:28:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:11.188440 | orchestrator | 2026-01-05 01:28:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:11.189890 | orchestrator | 2026-01-05 01:28:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:11.190169 | orchestrator | 2026-01-05 01:28:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:14.243401 | orchestrator | 2026-01-05 01:28:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:14.245140 | orchestrator | 2026-01-05 01:28:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:14.245740 | orchestrator | 2026-01-05 01:28:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:17.290400 | orchestrator | 2026-01-05 
01:28:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:17.295011 | orchestrator | 2026-01-05 01:28:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:17.295095 | orchestrator | 2026-01-05 01:28:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:20.339469 | orchestrator | 2026-01-05 01:28:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:20.341013 | orchestrator | 2026-01-05 01:28:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:20.341101 | orchestrator | 2026-01-05 01:28:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:23.389536 | orchestrator | 2026-01-05 01:28:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:23.391168 | orchestrator | 2026-01-05 01:28:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:23.391318 | orchestrator | 2026-01-05 01:28:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:26.434585 | orchestrator | 2026-01-05 01:28:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:26.435757 | orchestrator | 2026-01-05 01:28:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:26.436125 | orchestrator | 2026-01-05 01:28:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:29.482438 | orchestrator | 2026-01-05 01:28:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:29.485450 | orchestrator | 2026-01-05 01:28:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:29.486198 | orchestrator | 2026-01-05 01:28:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:32.537758 | orchestrator | 2026-01-05 01:28:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:28:32.539109 | orchestrator | 2026-01-05 01:28:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:32.539138 | orchestrator | 2026-01-05 01:28:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:35.588539 | orchestrator | 2026-01-05 01:28:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:35.590397 | orchestrator | 2026-01-05 01:28:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:35.590440 | orchestrator | 2026-01-05 01:28:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:38.631728 | orchestrator | 2026-01-05 01:28:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:38.632417 | orchestrator | 2026-01-05 01:28:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:38.632441 | orchestrator | 2026-01-05 01:28:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:41.681722 | orchestrator | 2026-01-05 01:28:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:41.683750 | orchestrator | 2026-01-05 01:28:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:41.683815 | orchestrator | 2026-01-05 01:28:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:44.725680 | orchestrator | 2026-01-05 01:28:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:44.726498 | orchestrator | 2026-01-05 01:28:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:44.726542 | orchestrator | 2026-01-05 01:28:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:47.778242 | orchestrator | 2026-01-05 01:28:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:47.779380 | orchestrator | 2026-01-05 01:28:47 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:47.779516 | orchestrator | 2026-01-05 01:28:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:50.827944 | orchestrator | 2026-01-05 01:28:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:50.830666 | orchestrator | 2026-01-05 01:28:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:50.830964 | orchestrator | 2026-01-05 01:28:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:53.882789 | orchestrator | 2026-01-05 01:28:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:53.884133 | orchestrator | 2026-01-05 01:28:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:53.884186 | orchestrator | 2026-01-05 01:28:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:56.925806 | orchestrator | 2026-01-05 01:28:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:56.927303 | orchestrator | 2026-01-05 01:28:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:56.927349 | orchestrator | 2026-01-05 01:28:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:59.971510 | orchestrator | 2026-01-05 01:28:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:28:59.972980 | orchestrator | 2026-01-05 01:28:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:28:59.973024 | orchestrator | 2026-01-05 01:28:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:03.023479 | orchestrator | 2026-01-05 01:29:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:03.025799 | orchestrator | 2026-01-05 01:29:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:29:03.025893 | orchestrator | 2026-01-05 01:29:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:06.071768 | orchestrator | 2026-01-05 01:29:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:06.074308 | orchestrator | 2026-01-05 01:29:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:06.074370 | orchestrator | 2026-01-05 01:29:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:09.124551 | orchestrator | 2026-01-05 01:29:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:09.126567 | orchestrator | 2026-01-05 01:29:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:09.126699 | orchestrator | 2026-01-05 01:29:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:12.178475 | orchestrator | 2026-01-05 01:29:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:12.179818 | orchestrator | 2026-01-05 01:29:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:12.179874 | orchestrator | 2026-01-05 01:29:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:15.228270 | orchestrator | 2026-01-05 01:29:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:15.229256 | orchestrator | 2026-01-05 01:29:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:15.229289 | orchestrator | 2026-01-05 01:29:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:18.280177 | orchestrator | 2026-01-05 01:29:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:18.283376 | orchestrator | 2026-01-05 01:29:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:18.283480 | orchestrator | 2026-01-05 01:29:18 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:29:21.330962 | orchestrator | 2026-01-05 01:29:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:21.334345 | orchestrator | 2026-01-05 01:29:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:21.334417 | orchestrator | 2026-01-05 01:29:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:24.385535 | orchestrator | 2026-01-05 01:29:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:24.388241 | orchestrator | 2026-01-05 01:29:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:24.388312 | orchestrator | 2026-01-05 01:29:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:27.447552 | orchestrator | 2026-01-05 01:29:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:27.449776 | orchestrator | 2026-01-05 01:29:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:27.449900 | orchestrator | 2026-01-05 01:29:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:30.498361 | orchestrator | 2026-01-05 01:29:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:30.499151 | orchestrator | 2026-01-05 01:29:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:30.499252 | orchestrator | 2026-01-05 01:29:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:33.549333 | orchestrator | 2026-01-05 01:29:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:33.550539 | orchestrator | 2026-01-05 01:29:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:33.550645 | orchestrator | 2026-01-05 01:29:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:36.594481 | orchestrator | 2026-01-05 
01:29:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:36.596687 | orchestrator | 2026-01-05 01:29:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:36.596949 | orchestrator | 2026-01-05 01:29:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:39.636025 | orchestrator | 2026-01-05 01:29:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:39.637549 | orchestrator | 2026-01-05 01:29:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:39.637580 | orchestrator | 2026-01-05 01:29:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:42.687599 | orchestrator | 2026-01-05 01:29:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:42.689104 | orchestrator | 2026-01-05 01:29:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:42.689252 | orchestrator | 2026-01-05 01:29:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:45.738672 | orchestrator | 2026-01-05 01:29:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:45.740747 | orchestrator | 2026-01-05 01:29:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:45.740813 | orchestrator | 2026-01-05 01:29:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:48.791783 | orchestrator | 2026-01-05 01:29:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:48.793509 | orchestrator | 2026-01-05 01:29:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:48.793562 | orchestrator | 2026-01-05 01:29:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:51.837395 | orchestrator | 2026-01-05 01:29:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:29:51.839663 | orchestrator | 2026-01-05 01:29:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:51.839725 | orchestrator | 2026-01-05 01:29:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:54.889410 | orchestrator | 2026-01-05 01:29:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:54.891152 | orchestrator | 2026-01-05 01:29:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:54.891291 | orchestrator | 2026-01-05 01:29:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:57.941282 | orchestrator | 2026-01-05 01:29:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:29:57.943343 | orchestrator | 2026-01-05 01:29:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:29:57.943457 | orchestrator | 2026-01-05 01:29:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:00.991784 | orchestrator | 2026-01-05 01:30:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:00.993593 | orchestrator | 2026-01-05 01:30:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:00.993657 | orchestrator | 2026-01-05 01:30:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:04.040135 | orchestrator | 2026-01-05 01:30:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:04.041673 | orchestrator | 2026-01-05 01:30:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:04.041727 | orchestrator | 2026-01-05 01:30:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:07.090546 | orchestrator | 2026-01-05 01:30:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:07.090917 | orchestrator | 2026-01-05 01:30:07 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:07.091017 | orchestrator | 2026-01-05 01:30:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:10.136524 | orchestrator | 2026-01-05 01:30:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:10.137093 | orchestrator | 2026-01-05 01:30:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:10.137111 | orchestrator | 2026-01-05 01:30:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:13.191198 | orchestrator | 2026-01-05 01:30:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:13.193076 | orchestrator | 2026-01-05 01:30:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:13.193118 | orchestrator | 2026-01-05 01:30:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:16.240704 | orchestrator | 2026-01-05 01:30:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:16.245161 | orchestrator | 2026-01-05 01:30:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:16.245244 | orchestrator | 2026-01-05 01:30:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:19.288379 | orchestrator | 2026-01-05 01:30:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:19.291199 | orchestrator | 2026-01-05 01:30:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:19.291354 | orchestrator | 2026-01-05 01:30:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:22.338439 | orchestrator | 2026-01-05 01:30:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:22.339811 | orchestrator | 2026-01-05 01:30:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:30:22.339895 | orchestrator | 2026-01-05 01:30:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:25.384235 | orchestrator | 2026-01-05 01:30:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:25.384851 | orchestrator | 2026-01-05 01:30:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:25.384893 | orchestrator | 2026-01-05 01:30:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:28.439716 | orchestrator | 2026-01-05 01:30:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:28.440562 | orchestrator | 2026-01-05 01:30:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:28.440616 | orchestrator | 2026-01-05 01:30:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:31.495199 | orchestrator | 2026-01-05 01:30:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:31.496879 | orchestrator | 2026-01-05 01:30:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:31.496925 | orchestrator | 2026-01-05 01:30:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:34.544708 | orchestrator | 2026-01-05 01:30:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:34.547824 | orchestrator | 2026-01-05 01:30:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:34.547907 | orchestrator | 2026-01-05 01:30:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:37.597556 | orchestrator | 2026-01-05 01:30:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:37.599773 | orchestrator | 2026-01-05 01:30:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:37.599990 | orchestrator | 2026-01-05 01:30:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:30:40.645774 | orchestrator | 2026-01-05 01:30:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:40.646002 | orchestrator | 2026-01-05 01:30:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:40.646068 | orchestrator | 2026-01-05 01:30:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:43.688960 | orchestrator | 2026-01-05 01:30:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:43.690691 | orchestrator | 2026-01-05 01:30:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:43.690755 | orchestrator | 2026-01-05 01:30:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:46.748173 | orchestrator | 2026-01-05 01:30:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:46.749118 | orchestrator | 2026-01-05 01:30:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:46.749239 | orchestrator | 2026-01-05 01:30:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:49.796669 | orchestrator | 2026-01-05 01:30:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:49.799066 | orchestrator | 2026-01-05 01:30:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:49.799131 | orchestrator | 2026-01-05 01:30:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:52.843048 | orchestrator | 2026-01-05 01:30:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:52.843276 | orchestrator | 2026-01-05 01:30:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:52.843303 | orchestrator | 2026-01-05 01:30:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:55.893479 | orchestrator | 2026-01-05 
01:30:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:55.895012 | orchestrator | 2026-01-05 01:30:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:55.895952 | orchestrator | 2026-01-05 01:30:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:58.947882 | orchestrator | 2026-01-05 01:30:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:30:58.949424 | orchestrator | 2026-01-05 01:30:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:30:58.949507 | orchestrator | 2026-01-05 01:30:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:31:02.000700 | orchestrator | 2026-01-05 01:31:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:31:02.001749 | orchestrator | 2026-01-05 01:31:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:31:02.001857 | orchestrator | 2026-01-05 01:31:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:31:05.056159 | orchestrator | 2026-01-05 01:31:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:31:05.056859 | orchestrator | 2026-01-05 01:31:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:31:05.056896 | orchestrator | 2026-01-05 01:31:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:31:08.102642 | orchestrator | 2026-01-05 01:31:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:31:08.103515 | orchestrator | 2026-01-05 01:31:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:31:08.103588 | orchestrator | 2026-01-05 01:31:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:31:11.162564 | orchestrator | 2026-01-05 01:31:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:31:11.163727 | orchestrator | 2026-01-05 01:31:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:31:11.163787 | orchestrator | 2026-01-05 01:31:11 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:31:14.213887 | orchestrator | 2026-01-05 01:31:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:31:14.215269 | orchestrator | 2026-01-05 01:31:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:31:14.215396 | orchestrator | 2026-01-05 01:31:14 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:36:40.604220 | orchestrator | 2026-01-05 01:36:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:36:40.606345 | orchestrator | 2026-01-05 01:36:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:36:40.606395 | orchestrator | 2026-01-05 01:36:40 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:36:43.660613 | orchestrator | 2026-01-05 01:36:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:36:43.662781 | orchestrator | 2026-01-05 01:36:43 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:36:43.662862 | orchestrator | 2026-01-05 01:36:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:46.708858 | orchestrator | 2026-01-05 01:36:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:36:46.710924 | orchestrator | 2026-01-05 01:36:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:36:46.710981 | orchestrator | 2026-01-05 01:36:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:49.766962 | orchestrator | 2026-01-05 01:36:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:36:49.769197 | orchestrator | 2026-01-05 01:36:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:36:49.769249 | orchestrator | 2026-01-05 01:36:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:52.820117 | orchestrator | 2026-01-05 01:36:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:36:52.822363 | orchestrator | 2026-01-05 01:36:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:36:52.822412 | orchestrator | 2026-01-05 01:36:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:55.870309 | orchestrator | 2026-01-05 01:36:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:36:55.871566 | orchestrator | 2026-01-05 01:36:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:36:55.871600 | orchestrator | 2026-01-05 01:36:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:58.914676 | orchestrator | 2026-01-05 01:36:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:36:58.916971 | orchestrator | 2026-01-05 01:36:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:36:58.917410 | orchestrator | 2026-01-05 01:36:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:01.963914 | orchestrator | 2026-01-05 01:37:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:01.966169 | orchestrator | 2026-01-05 01:37:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:01.966225 | orchestrator | 2026-01-05 01:37:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:05.012373 | orchestrator | 2026-01-05 01:37:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:05.013732 | orchestrator | 2026-01-05 01:37:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:05.013783 | orchestrator | 2026-01-05 01:37:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:08.054257 | orchestrator | 2026-01-05 01:37:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:08.054846 | orchestrator | 2026-01-05 01:37:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:08.055549 | orchestrator | 2026-01-05 01:37:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:11.102599 | orchestrator | 2026-01-05 01:37:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:11.104908 | orchestrator | 2026-01-05 01:37:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:11.105040 | orchestrator | 2026-01-05 01:37:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:14.149001 | orchestrator | 2026-01-05 01:37:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:14.151402 | orchestrator | 2026-01-05 01:37:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:14.151455 | orchestrator | 2026-01-05 01:37:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:37:17.204318 | orchestrator | 2026-01-05 01:37:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:17.206152 | orchestrator | 2026-01-05 01:37:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:17.312811 | orchestrator | 2026-01-05 01:37:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:20.248511 | orchestrator | 2026-01-05 01:37:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:20.249644 | orchestrator | 2026-01-05 01:37:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:20.249725 | orchestrator | 2026-01-05 01:37:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:23.298942 | orchestrator | 2026-01-05 01:37:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:23.300044 | orchestrator | 2026-01-05 01:37:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:23.300134 | orchestrator | 2026-01-05 01:37:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:26.349934 | orchestrator | 2026-01-05 01:37:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:26.350819 | orchestrator | 2026-01-05 01:37:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:26.350860 | orchestrator | 2026-01-05 01:37:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:29.399288 | orchestrator | 2026-01-05 01:37:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:29.400394 | orchestrator | 2026-01-05 01:37:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:29.400422 | orchestrator | 2026-01-05 01:37:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:32.449179 | orchestrator | 2026-01-05 
01:37:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:32.451032 | orchestrator | 2026-01-05 01:37:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:32.451078 | orchestrator | 2026-01-05 01:37:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:35.493011 | orchestrator | 2026-01-05 01:37:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:35.495657 | orchestrator | 2026-01-05 01:37:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:35.495730 | orchestrator | 2026-01-05 01:37:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:38.555068 | orchestrator | 2026-01-05 01:37:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:38.556861 | orchestrator | 2026-01-05 01:37:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:38.556931 | orchestrator | 2026-01-05 01:37:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:41.606584 | orchestrator | 2026-01-05 01:37:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:41.608440 | orchestrator | 2026-01-05 01:37:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:41.608528 | orchestrator | 2026-01-05 01:37:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:44.655609 | orchestrator | 2026-01-05 01:37:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:44.656401 | orchestrator | 2026-01-05 01:37:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:44.656433 | orchestrator | 2026-01-05 01:37:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:47.704976 | orchestrator | 2026-01-05 01:37:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:37:47.707189 | orchestrator | 2026-01-05 01:37:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:47.707267 | orchestrator | 2026-01-05 01:37:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:50.753663 | orchestrator | 2026-01-05 01:37:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:50.756375 | orchestrator | 2026-01-05 01:37:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:50.756432 | orchestrator | 2026-01-05 01:37:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:53.807714 | orchestrator | 2026-01-05 01:37:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:53.808947 | orchestrator | 2026-01-05 01:37:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:53.809001 | orchestrator | 2026-01-05 01:37:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:56.861435 | orchestrator | 2026-01-05 01:37:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:56.863639 | orchestrator | 2026-01-05 01:37:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:56.863720 | orchestrator | 2026-01-05 01:37:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:59.910657 | orchestrator | 2026-01-05 01:37:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:37:59.912729 | orchestrator | 2026-01-05 01:37:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:37:59.912797 | orchestrator | 2026-01-05 01:37:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:02.967344 | orchestrator | 2026-01-05 01:38:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:02.970224 | orchestrator | 2026-01-05 01:38:02 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:02.970296 | orchestrator | 2026-01-05 01:38:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:06.019221 | orchestrator | 2026-01-05 01:38:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:06.021309 | orchestrator | 2026-01-05 01:38:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:06.021385 | orchestrator | 2026-01-05 01:38:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:09.085359 | orchestrator | 2026-01-05 01:38:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:09.085455 | orchestrator | 2026-01-05 01:38:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:09.085465 | orchestrator | 2026-01-05 01:38:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:12.134869 | orchestrator | 2026-01-05 01:38:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:12.136580 | orchestrator | 2026-01-05 01:38:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:12.136624 | orchestrator | 2026-01-05 01:38:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:15.190272 | orchestrator | 2026-01-05 01:38:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:15.190390 | orchestrator | 2026-01-05 01:38:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:15.190499 | orchestrator | 2026-01-05 01:38:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:18.233272 | orchestrator | 2026-01-05 01:38:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:18.234598 | orchestrator | 2026-01-05 01:38:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:38:18.234659 | orchestrator | 2026-01-05 01:38:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:21.285374 | orchestrator | 2026-01-05 01:38:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:21.286961 | orchestrator | 2026-01-05 01:38:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:21.287113 | orchestrator | 2026-01-05 01:38:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:24.338722 | orchestrator | 2026-01-05 01:38:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:24.341140 | orchestrator | 2026-01-05 01:38:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:24.341214 | orchestrator | 2026-01-05 01:38:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:27.395470 | orchestrator | 2026-01-05 01:38:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:27.397684 | orchestrator | 2026-01-05 01:38:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:27.397769 | orchestrator | 2026-01-05 01:38:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:30.446829 | orchestrator | 2026-01-05 01:38:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:30.447876 | orchestrator | 2026-01-05 01:38:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:30.448064 | orchestrator | 2026-01-05 01:38:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:33.505980 | orchestrator | 2026-01-05 01:38:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:33.507671 | orchestrator | 2026-01-05 01:38:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:33.507766 | orchestrator | 2026-01-05 01:38:33 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:38:36.564380 | orchestrator | 2026-01-05 01:38:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:36.565788 | orchestrator | 2026-01-05 01:38:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:36.565844 | orchestrator | 2026-01-05 01:38:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:39.615008 | orchestrator | 2026-01-05 01:38:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:39.615927 | orchestrator | 2026-01-05 01:38:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:39.615969 | orchestrator | 2026-01-05 01:38:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:42.655944 | orchestrator | 2026-01-05 01:38:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:42.658109 | orchestrator | 2026-01-05 01:38:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:42.658154 | orchestrator | 2026-01-05 01:38:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:45.702995 | orchestrator | 2026-01-05 01:38:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:45.703706 | orchestrator | 2026-01-05 01:38:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:45.703772 | orchestrator | 2026-01-05 01:38:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:48.783797 | orchestrator | 2026-01-05 01:38:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:48.783935 | orchestrator | 2026-01-05 01:38:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:48.783958 | orchestrator | 2026-01-05 01:38:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:51.839582 | orchestrator | 2026-01-05 
01:38:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:51.841552 | orchestrator | 2026-01-05 01:38:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:51.843122 | orchestrator | 2026-01-05 01:38:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:54.883929 | orchestrator | 2026-01-05 01:38:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:54.887535 | orchestrator | 2026-01-05 01:38:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:54.887627 | orchestrator | 2026-01-05 01:38:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:57.935869 | orchestrator | 2026-01-05 01:38:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:38:57.937598 | orchestrator | 2026-01-05 01:38:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:38:57.937643 | orchestrator | 2026-01-05 01:38:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:00.980951 | orchestrator | 2026-01-05 01:39:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:00.983992 | orchestrator | 2026-01-05 01:39:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:00.984066 | orchestrator | 2026-01-05 01:39:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:04.037733 | orchestrator | 2026-01-05 01:39:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:04.040169 | orchestrator | 2026-01-05 01:39:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:04.040258 | orchestrator | 2026-01-05 01:39:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:07.083908 | orchestrator | 2026-01-05 01:39:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:39:07.087717 | orchestrator | 2026-01-05 01:39:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:07.087807 | orchestrator | 2026-01-05 01:39:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:10.140043 | orchestrator | 2026-01-05 01:39:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:10.142326 | orchestrator | 2026-01-05 01:39:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:10.142401 | orchestrator | 2026-01-05 01:39:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:13.190763 | orchestrator | 2026-01-05 01:39:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:13.193086 | orchestrator | 2026-01-05 01:39:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:13.193145 | orchestrator | 2026-01-05 01:39:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:16.245330 | orchestrator | 2026-01-05 01:39:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:16.247534 | orchestrator | 2026-01-05 01:39:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:16.247889 | orchestrator | 2026-01-05 01:39:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:19.294912 | orchestrator | 2026-01-05 01:39:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:19.297586 | orchestrator | 2026-01-05 01:39:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:19.297685 | orchestrator | 2026-01-05 01:39:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:22.358856 | orchestrator | 2026-01-05 01:39:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:22.361370 | orchestrator | 2026-01-05 01:39:22 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:22.361476 | orchestrator | 2026-01-05 01:39:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:25.402540 | orchestrator | 2026-01-05 01:39:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:25.407174 | orchestrator | 2026-01-05 01:39:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:25.407328 | orchestrator | 2026-01-05 01:39:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:28.459887 | orchestrator | 2026-01-05 01:39:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:28.463018 | orchestrator | 2026-01-05 01:39:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:28.463093 | orchestrator | 2026-01-05 01:39:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:31.508348 | orchestrator | 2026-01-05 01:39:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:31.511221 | orchestrator | 2026-01-05 01:39:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:31.511451 | orchestrator | 2026-01-05 01:39:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:34.558898 | orchestrator | 2026-01-05 01:39:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:34.560721 | orchestrator | 2026-01-05 01:39:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:34.560778 | orchestrator | 2026-01-05 01:39:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:37.607415 | orchestrator | 2026-01-05 01:39:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:37.609506 | orchestrator | 2026-01-05 01:39:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:39:37.609540 | orchestrator | 2026-01-05 01:39:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:40.654475 | orchestrator | 2026-01-05 01:39:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:40.656862 | orchestrator | 2026-01-05 01:39:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:40.657111 | orchestrator | 2026-01-05 01:39:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:43.703602 | orchestrator | 2026-01-05 01:39:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:43.706500 | orchestrator | 2026-01-05 01:39:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:43.706709 | orchestrator | 2026-01-05 01:39:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:46.753596 | orchestrator | 2026-01-05 01:39:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:46.755358 | orchestrator | 2026-01-05 01:39:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:46.755413 | orchestrator | 2026-01-05 01:39:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:49.796556 | orchestrator | 2026-01-05 01:39:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:49.798160 | orchestrator | 2026-01-05 01:39:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:49.798422 | orchestrator | 2026-01-05 01:39:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:52.840923 | orchestrator | 2026-01-05 01:39:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:52.843452 | orchestrator | 2026-01-05 01:39:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:52.843528 | orchestrator | 2026-01-05 01:39:52 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:39:55.891047 | orchestrator | 2026-01-05 01:39:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:55.893111 | orchestrator | 2026-01-05 01:39:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:55.893180 | orchestrator | 2026-01-05 01:39:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:58.944744 | orchestrator | 2026-01-05 01:39:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:39:58.947744 | orchestrator | 2026-01-05 01:39:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:39:58.947808 | orchestrator | 2026-01-05 01:39:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:01.999515 | orchestrator | 2026-01-05 01:40:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:40:02.002228 | orchestrator | 2026-01-05 01:40:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:02.002344 | orchestrator | 2026-01-05 01:40:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:05.067872 | orchestrator | 2026-01-05 01:40:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:40:05.071437 | orchestrator | 2026-01-05 01:40:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:05.071920 | orchestrator | 2026-01-05 01:40:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:08.126258 | orchestrator | 2026-01-05 01:40:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:40:08.127584 | orchestrator | 2026-01-05 01:40:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:08.127624 | orchestrator | 2026-01-05 01:40:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:11.179382 | orchestrator | 2026-01-05 
01:40:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:40:11.180921 | orchestrator | 2026-01-05 01:40:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:11.181344 | orchestrator | 2026-01-05 01:40:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:14.230975 | orchestrator | 2026-01-05 01:40:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:40:14.232816 | orchestrator | 2026-01-05 01:40:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:14.232936 | orchestrator | 2026-01-05 01:40:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:17.272566 | orchestrator | 2026-01-05 01:40:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:40:17.273951 | orchestrator | 2026-01-05 01:40:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:17.274061 | orchestrator | 2026-01-05 01:40:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:20.321380 | orchestrator | 2026-01-05 01:40:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:40:20.322070 | orchestrator | 2026-01-05 01:40:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:20.322116 | orchestrator | 2026-01-05 01:40:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:23.372984 | orchestrator | 2026-01-05 01:40:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:40:23.375117 | orchestrator | 2026-01-05 01:40:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:23.375419 | orchestrator | 2026-01-05 01:40:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:40:26.423968 | orchestrator | 2026-01-05 01:40:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:40:26.425991 | orchestrator | 2026-01-05 01:40:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:40:26.426119 | orchestrator | 2026-01-05 01:40:26 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:40:29 through 01:45:40; tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remained in state STARTED throughout ...]
2026-01-05 01:45:43.815505 | orchestrator | 2026-01-05 01:45:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:45:43.818158 | orchestrator | 2026-01-05 01:45:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:45:43.818233 | orchestrator | 2026-01-05 01:45:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:46.862447 | orchestrator | 2026-01-05 01:45:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:45:46.865694 | orchestrator | 2026-01-05 01:45:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:45:46.865818 | orchestrator | 2026-01-05 01:45:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:49.915639 | orchestrator | 2026-01-05 01:45:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:45:49.917224 | orchestrator | 2026-01-05 01:45:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:45:49.917286 | orchestrator | 2026-01-05 01:45:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:52.968010 | orchestrator | 2026-01-05 01:45:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:45:52.969653 | orchestrator | 2026-01-05 01:45:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:45:52.969688 | orchestrator | 2026-01-05 01:45:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:56.012743 | orchestrator | 2026-01-05 01:45:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:45:56.014152 | orchestrator | 2026-01-05 01:45:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:45:56.014199 | orchestrator | 2026-01-05 01:45:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:59.066388 | orchestrator | 2026-01-05 01:45:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:45:59.068531 | orchestrator | 2026-01-05 01:45:59 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:45:59.068577 | orchestrator | 2026-01-05 01:45:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:02.120895 | orchestrator | 2026-01-05 01:46:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:02.122077 | orchestrator | 2026-01-05 01:46:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:02.122191 | orchestrator | 2026-01-05 01:46:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:05.171602 | orchestrator | 2026-01-05 01:46:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:05.173259 | orchestrator | 2026-01-05 01:46:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:05.173308 | orchestrator | 2026-01-05 01:46:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:08.219075 | orchestrator | 2026-01-05 01:46:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:08.220292 | orchestrator | 2026-01-05 01:46:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:08.220658 | orchestrator | 2026-01-05 01:46:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:11.275085 | orchestrator | 2026-01-05 01:46:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:11.277079 | orchestrator | 2026-01-05 01:46:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:11.277248 | orchestrator | 2026-01-05 01:46:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:14.325413 | orchestrator | 2026-01-05 01:46:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:14.328092 | orchestrator | 2026-01-05 01:46:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:46:14.328254 | orchestrator | 2026-01-05 01:46:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:17.374443 | orchestrator | 2026-01-05 01:46:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:17.377081 | orchestrator | 2026-01-05 01:46:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:17.377180 | orchestrator | 2026-01-05 01:46:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:20.424891 | orchestrator | 2026-01-05 01:46:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:20.427100 | orchestrator | 2026-01-05 01:46:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:20.427189 | orchestrator | 2026-01-05 01:46:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:23.480757 | orchestrator | 2026-01-05 01:46:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:23.482238 | orchestrator | 2026-01-05 01:46:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:23.482287 | orchestrator | 2026-01-05 01:46:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:26.528150 | orchestrator | 2026-01-05 01:46:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:26.529157 | orchestrator | 2026-01-05 01:46:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:26.529285 | orchestrator | 2026-01-05 01:46:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:29.575509 | orchestrator | 2026-01-05 01:46:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:29.578686 | orchestrator | 2026-01-05 01:46:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:29.578769 | orchestrator | 2026-01-05 01:46:29 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:46:32.620910 | orchestrator | 2026-01-05 01:46:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:32.622933 | orchestrator | 2026-01-05 01:46:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:32.622997 | orchestrator | 2026-01-05 01:46:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:35.665022 | orchestrator | 2026-01-05 01:46:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:35.666828 | orchestrator | 2026-01-05 01:46:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:35.666900 | orchestrator | 2026-01-05 01:46:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:38.722400 | orchestrator | 2026-01-05 01:46:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:38.723360 | orchestrator | 2026-01-05 01:46:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:38.723411 | orchestrator | 2026-01-05 01:46:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:41.766608 | orchestrator | 2026-01-05 01:46:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:41.767706 | orchestrator | 2026-01-05 01:46:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:41.767751 | orchestrator | 2026-01-05 01:46:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:44.809251 | orchestrator | 2026-01-05 01:46:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:44.811771 | orchestrator | 2026-01-05 01:46:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:44.811827 | orchestrator | 2026-01-05 01:46:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:47.850878 | orchestrator | 2026-01-05 
01:46:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:47.852877 | orchestrator | 2026-01-05 01:46:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:47.852984 | orchestrator | 2026-01-05 01:46:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:50.905920 | orchestrator | 2026-01-05 01:46:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:50.906911 | orchestrator | 2026-01-05 01:46:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:50.906957 | orchestrator | 2026-01-05 01:46:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:53.966227 | orchestrator | 2026-01-05 01:46:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:53.968336 | orchestrator | 2026-01-05 01:46:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:53.968430 | orchestrator | 2026-01-05 01:46:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:57.020225 | orchestrator | 2026-01-05 01:46:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:46:57.021432 | orchestrator | 2026-01-05 01:46:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:46:57.021509 | orchestrator | 2026-01-05 01:46:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:00.067512 | orchestrator | 2026-01-05 01:47:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:00.070185 | orchestrator | 2026-01-05 01:47:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:00.070278 | orchestrator | 2026-01-05 01:47:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:03.124915 | orchestrator | 2026-01-05 01:47:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:47:03.127018 | orchestrator | 2026-01-05 01:47:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:03.127099 | orchestrator | 2026-01-05 01:47:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:06.179345 | orchestrator | 2026-01-05 01:47:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:06.182494 | orchestrator | 2026-01-05 01:47:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:06.182566 | orchestrator | 2026-01-05 01:47:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:09.230580 | orchestrator | 2026-01-05 01:47:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:09.233957 | orchestrator | 2026-01-05 01:47:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:09.234089 | orchestrator | 2026-01-05 01:47:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:12.275905 | orchestrator | 2026-01-05 01:47:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:12.277411 | orchestrator | 2026-01-05 01:47:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:12.277474 | orchestrator | 2026-01-05 01:47:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:15.320110 | orchestrator | 2026-01-05 01:47:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:15.322321 | orchestrator | 2026-01-05 01:47:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:15.322398 | orchestrator | 2026-01-05 01:47:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:18.370247 | orchestrator | 2026-01-05 01:47:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:18.371490 | orchestrator | 2026-01-05 01:47:18 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:18.371532 | orchestrator | 2026-01-05 01:47:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:21.411654 | orchestrator | 2026-01-05 01:47:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:21.412992 | orchestrator | 2026-01-05 01:47:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:21.413080 | orchestrator | 2026-01-05 01:47:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:24.462738 | orchestrator | 2026-01-05 01:47:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:24.464725 | orchestrator | 2026-01-05 01:47:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:24.464838 | orchestrator | 2026-01-05 01:47:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:27.508683 | orchestrator | 2026-01-05 01:47:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:27.510243 | orchestrator | 2026-01-05 01:47:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:27.510348 | orchestrator | 2026-01-05 01:47:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:30.558680 | orchestrator | 2026-01-05 01:47:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:30.560201 | orchestrator | 2026-01-05 01:47:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:30.560876 | orchestrator | 2026-01-05 01:47:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:33.612370 | orchestrator | 2026-01-05 01:47:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:33.614193 | orchestrator | 2026-01-05 01:47:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:47:33.614253 | orchestrator | 2026-01-05 01:47:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:36.658181 | orchestrator | 2026-01-05 01:47:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:36.659546 | orchestrator | 2026-01-05 01:47:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:36.659582 | orchestrator | 2026-01-05 01:47:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:39.706406 | orchestrator | 2026-01-05 01:47:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:39.708506 | orchestrator | 2026-01-05 01:47:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:39.708558 | orchestrator | 2026-01-05 01:47:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:42.761297 | orchestrator | 2026-01-05 01:47:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:42.763738 | orchestrator | 2026-01-05 01:47:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:42.764299 | orchestrator | 2026-01-05 01:47:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:45.811312 | orchestrator | 2026-01-05 01:47:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:45.813023 | orchestrator | 2026-01-05 01:47:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:45.813091 | orchestrator | 2026-01-05 01:47:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:48.862979 | orchestrator | 2026-01-05 01:47:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:48.864726 | orchestrator | 2026-01-05 01:47:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:48.865009 | orchestrator | 2026-01-05 01:47:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:47:51.909887 | orchestrator | 2026-01-05 01:47:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:51.912086 | orchestrator | 2026-01-05 01:47:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:51.912153 | orchestrator | 2026-01-05 01:47:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:54.961038 | orchestrator | 2026-01-05 01:47:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:54.964297 | orchestrator | 2026-01-05 01:47:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:54.964402 | orchestrator | 2026-01-05 01:47:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:58.022115 | orchestrator | 2026-01-05 01:47:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:47:58.023382 | orchestrator | 2026-01-05 01:47:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:47:58.023435 | orchestrator | 2026-01-05 01:47:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:01.067477 | orchestrator | 2026-01-05 01:48:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:01.070175 | orchestrator | 2026-01-05 01:48:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:01.070244 | orchestrator | 2026-01-05 01:48:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:04.118876 | orchestrator | 2026-01-05 01:48:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:04.121130 | orchestrator | 2026-01-05 01:48:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:04.121235 | orchestrator | 2026-01-05 01:48:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:07.165606 | orchestrator | 2026-01-05 
01:48:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:07.167370 | orchestrator | 2026-01-05 01:48:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:07.167434 | orchestrator | 2026-01-05 01:48:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:10.222455 | orchestrator | 2026-01-05 01:48:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:10.222705 | orchestrator | 2026-01-05 01:48:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:10.223650 | orchestrator | 2026-01-05 01:48:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:13.279239 | orchestrator | 2026-01-05 01:48:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:13.281104 | orchestrator | 2026-01-05 01:48:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:13.281207 | orchestrator | 2026-01-05 01:48:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:16.332917 | orchestrator | 2026-01-05 01:48:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:16.335056 | orchestrator | 2026-01-05 01:48:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:16.335644 | orchestrator | 2026-01-05 01:48:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:19.385950 | orchestrator | 2026-01-05 01:48:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:19.388983 | orchestrator | 2026-01-05 01:48:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:19.389115 | orchestrator | 2026-01-05 01:48:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:22.432411 | orchestrator | 2026-01-05 01:48:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:48:22.433953 | orchestrator | 2026-01-05 01:48:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:22.434010 | orchestrator | 2026-01-05 01:48:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:25.478232 | orchestrator | 2026-01-05 01:48:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:25.480447 | orchestrator | 2026-01-05 01:48:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:25.480492 | orchestrator | 2026-01-05 01:48:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:28.530602 | orchestrator | 2026-01-05 01:48:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:28.532825 | orchestrator | 2026-01-05 01:48:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:28.532996 | orchestrator | 2026-01-05 01:48:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:31.574348 | orchestrator | 2026-01-05 01:48:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:31.577505 | orchestrator | 2026-01-05 01:48:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:31.577574 | orchestrator | 2026-01-05 01:48:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:34.623481 | orchestrator | 2026-01-05 01:48:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:34.624156 | orchestrator | 2026-01-05 01:48:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:34.624205 | orchestrator | 2026-01-05 01:48:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:37.675514 | orchestrator | 2026-01-05 01:48:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:37.676954 | orchestrator | 2026-01-05 01:48:37 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:37.677016 | orchestrator | 2026-01-05 01:48:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:40.724690 | orchestrator | 2026-01-05 01:48:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:40.729650 | orchestrator | 2026-01-05 01:48:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:40.729892 | orchestrator | 2026-01-05 01:48:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:43.779163 | orchestrator | 2026-01-05 01:48:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:43.780429 | orchestrator | 2026-01-05 01:48:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:43.780505 | orchestrator | 2026-01-05 01:48:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:46.832442 | orchestrator | 2026-01-05 01:48:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:46.834422 | orchestrator | 2026-01-05 01:48:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:46.834469 | orchestrator | 2026-01-05 01:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:49.884664 | orchestrator | 2026-01-05 01:48:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:49.887001 | orchestrator | 2026-01-05 01:48:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:49.887066 | orchestrator | 2026-01-05 01:48:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:52.931238 | orchestrator | 2026-01-05 01:48:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:52.933623 | orchestrator | 2026-01-05 01:48:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:48:52.933676 | orchestrator | 2026-01-05 01:48:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:55.984136 | orchestrator | 2026-01-05 01:48:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:55.984737 | orchestrator | 2026-01-05 01:48:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:55.984828 | orchestrator | 2026-01-05 01:48:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:59.045353 | orchestrator | 2026-01-05 01:48:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:48:59.046909 | orchestrator | 2026-01-05 01:48:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:48:59.046984 | orchestrator | 2026-01-05 01:48:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:49:02.098284 | orchestrator | 2026-01-05 01:49:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:49:02.099526 | orchestrator | 2026-01-05 01:49:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:49:02.099573 | orchestrator | 2026-01-05 01:49:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:49:05.141827 | orchestrator | 2026-01-05 01:49:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:49:05.143419 | orchestrator | 2026-01-05 01:49:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:49:05.143470 | orchestrator | 2026-01-05 01:49:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:49:08.183622 | orchestrator | 2026-01-05 01:49:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:49:08.184460 | orchestrator | 2026-01-05 01:49:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:49:08.184496 | orchestrator | 2026-01-05 01:49:08 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:49:11.225927 | orchestrator | 2026-01-05 01:49:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:49:11.228235 | orchestrator | 2026-01-05 01:49:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:49:11.228294 | orchestrator | 2026-01-05 01:49:11 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:49:14 through 01:54:22; tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remained in state STARTED throughout ...]
2026-01-05 01:54:25.490570 | orchestrator | 2026-01-05 01:54:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:54:25.492544 | orchestrator | 2026-01-05 01:54:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:54:25.492611 | orchestrator | 2026-01-05 01:54:25 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 01:54:28.542161 | orchestrator | 2026-01-05 01:54:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:28.544593 | orchestrator | 2026-01-05 01:54:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:28.544745 | orchestrator | 2026-01-05 01:54:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:31.592817 | orchestrator | 2026-01-05 01:54:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:31.595235 | orchestrator | 2026-01-05 01:54:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:31.595278 | orchestrator | 2026-01-05 01:54:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:34.637782 | orchestrator | 2026-01-05 01:54:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:34.639469 | orchestrator | 2026-01-05 01:54:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:34.639530 | orchestrator | 2026-01-05 01:54:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:37.698654 | orchestrator | 2026-01-05 01:54:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:37.701260 | orchestrator | 2026-01-05 01:54:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:37.701321 | orchestrator | 2026-01-05 01:54:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:40.754980 | orchestrator | 2026-01-05 01:54:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:40.756804 | orchestrator | 2026-01-05 01:54:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:40.756876 | orchestrator | 2026-01-05 01:54:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:43.808939 | orchestrator | 2026-01-05 
01:54:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:43.811496 | orchestrator | 2026-01-05 01:54:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:43.811548 | orchestrator | 2026-01-05 01:54:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:46.850417 | orchestrator | 2026-01-05 01:54:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:46.850997 | orchestrator | 2026-01-05 01:54:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:46.851019 | orchestrator | 2026-01-05 01:54:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:49.899191 | orchestrator | 2026-01-05 01:54:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:49.900144 | orchestrator | 2026-01-05 01:54:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:49.900204 | orchestrator | 2026-01-05 01:54:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:52.951791 | orchestrator | 2026-01-05 01:54:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:52.953893 | orchestrator | 2026-01-05 01:54:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:52.953952 | orchestrator | 2026-01-05 01:54:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:56.003534 | orchestrator | 2026-01-05 01:54:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:54:56.005857 | orchestrator | 2026-01-05 01:54:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:56.005978 | orchestrator | 2026-01-05 01:54:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:59.057623 | orchestrator | 2026-01-05 01:54:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:54:59.060653 | orchestrator | 2026-01-05 01:54:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:54:59.060753 | orchestrator | 2026-01-05 01:54:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:02.111771 | orchestrator | 2026-01-05 01:55:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:02.112360 | orchestrator | 2026-01-05 01:55:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:02.112388 | orchestrator | 2026-01-05 01:55:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:05.162457 | orchestrator | 2026-01-05 01:55:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:05.163300 | orchestrator | 2026-01-05 01:55:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:05.163350 | orchestrator | 2026-01-05 01:55:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:08.215910 | orchestrator | 2026-01-05 01:55:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:08.219441 | orchestrator | 2026-01-05 01:55:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:08.219507 | orchestrator | 2026-01-05 01:55:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:11.274708 | orchestrator | 2026-01-05 01:55:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:11.278590 | orchestrator | 2026-01-05 01:55:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:11.278655 | orchestrator | 2026-01-05 01:55:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:14.328336 | orchestrator | 2026-01-05 01:55:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:14.331065 | orchestrator | 2026-01-05 01:55:14 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:14.331199 | orchestrator | 2026-01-05 01:55:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:17.384400 | orchestrator | 2026-01-05 01:55:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:17.386534 | orchestrator | 2026-01-05 01:55:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:17.386650 | orchestrator | 2026-01-05 01:55:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:20.438489 | orchestrator | 2026-01-05 01:55:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:20.440241 | orchestrator | 2026-01-05 01:55:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:20.440303 | orchestrator | 2026-01-05 01:55:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:23.486852 | orchestrator | 2026-01-05 01:55:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:23.488391 | orchestrator | 2026-01-05 01:55:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:23.488454 | orchestrator | 2026-01-05 01:55:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:26.542112 | orchestrator | 2026-01-05 01:55:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:26.544776 | orchestrator | 2026-01-05 01:55:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:26.544877 | orchestrator | 2026-01-05 01:55:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:29.606844 | orchestrator | 2026-01-05 01:55:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:29.609254 | orchestrator | 2026-01-05 01:55:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:55:29.609330 | orchestrator | 2026-01-05 01:55:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:32.655325 | orchestrator | 2026-01-05 01:55:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:32.658337 | orchestrator | 2026-01-05 01:55:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:32.658413 | orchestrator | 2026-01-05 01:55:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:35.706933 | orchestrator | 2026-01-05 01:55:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:35.709468 | orchestrator | 2026-01-05 01:55:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:35.709999 | orchestrator | 2026-01-05 01:55:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:38.756295 | orchestrator | 2026-01-05 01:55:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:38.758815 | orchestrator | 2026-01-05 01:55:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:38.759220 | orchestrator | 2026-01-05 01:55:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:41.809131 | orchestrator | 2026-01-05 01:55:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:41.810699 | orchestrator | 2026-01-05 01:55:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:41.810741 | orchestrator | 2026-01-05 01:55:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:44.861277 | orchestrator | 2026-01-05 01:55:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:44.862462 | orchestrator | 2026-01-05 01:55:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:44.862496 | orchestrator | 2026-01-05 01:55:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:55:47.902334 | orchestrator | 2026-01-05 01:55:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:47.904429 | orchestrator | 2026-01-05 01:55:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:47.904579 | orchestrator | 2026-01-05 01:55:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:50.940759 | orchestrator | 2026-01-05 01:55:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:50.942226 | orchestrator | 2026-01-05 01:55:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:50.942368 | orchestrator | 2026-01-05 01:55:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:53.988219 | orchestrator | 2026-01-05 01:55:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:53.988999 | orchestrator | 2026-01-05 01:55:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:53.989045 | orchestrator | 2026-01-05 01:55:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:57.040911 | orchestrator | 2026-01-05 01:55:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:55:57.041085 | orchestrator | 2026-01-05 01:55:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:55:57.041097 | orchestrator | 2026-01-05 01:55:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:00.092636 | orchestrator | 2026-01-05 01:56:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:00.095152 | orchestrator | 2026-01-05 01:56:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:00.095207 | orchestrator | 2026-01-05 01:56:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:03.136691 | orchestrator | 2026-01-05 
01:56:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:03.137328 | orchestrator | 2026-01-05 01:56:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:03.137363 | orchestrator | 2026-01-05 01:56:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:06.180964 | orchestrator | 2026-01-05 01:56:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:06.182472 | orchestrator | 2026-01-05 01:56:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:06.182532 | orchestrator | 2026-01-05 01:56:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:09.232201 | orchestrator | 2026-01-05 01:56:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:09.233321 | orchestrator | 2026-01-05 01:56:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:09.233399 | orchestrator | 2026-01-05 01:56:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:12.276017 | orchestrator | 2026-01-05 01:56:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:12.278180 | orchestrator | 2026-01-05 01:56:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:12.278261 | orchestrator | 2026-01-05 01:56:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:15.335922 | orchestrator | 2026-01-05 01:56:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:15.337575 | orchestrator | 2026-01-05 01:56:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:15.337669 | orchestrator | 2026-01-05 01:56:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:18.390636 | orchestrator | 2026-01-05 01:56:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:56:18.393094 | orchestrator | 2026-01-05 01:56:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:18.393170 | orchestrator | 2026-01-05 01:56:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:21.437744 | orchestrator | 2026-01-05 01:56:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:21.439727 | orchestrator | 2026-01-05 01:56:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:21.439808 | orchestrator | 2026-01-05 01:56:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:24.486076 | orchestrator | 2026-01-05 01:56:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:24.488271 | orchestrator | 2026-01-05 01:56:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:24.488351 | orchestrator | 2026-01-05 01:56:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:27.532041 | orchestrator | 2026-01-05 01:56:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:27.534733 | orchestrator | 2026-01-05 01:56:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:27.534838 | orchestrator | 2026-01-05 01:56:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:30.578605 | orchestrator | 2026-01-05 01:56:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:30.580155 | orchestrator | 2026-01-05 01:56:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:30.580294 | orchestrator | 2026-01-05 01:56:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:33.628928 | orchestrator | 2026-01-05 01:56:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:33.630769 | orchestrator | 2026-01-05 01:56:33 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:33.630831 | orchestrator | 2026-01-05 01:56:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:36.680959 | orchestrator | 2026-01-05 01:56:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:36.682929 | orchestrator | 2026-01-05 01:56:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:36.683075 | orchestrator | 2026-01-05 01:56:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:39.734569 | orchestrator | 2026-01-05 01:56:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:39.737789 | orchestrator | 2026-01-05 01:56:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:39.737885 | orchestrator | 2026-01-05 01:56:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:42.787831 | orchestrator | 2026-01-05 01:56:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:42.790632 | orchestrator | 2026-01-05 01:56:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:42.790709 | orchestrator | 2026-01-05 01:56:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:45.836504 | orchestrator | 2026-01-05 01:56:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:45.839229 | orchestrator | 2026-01-05 01:56:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:45.839320 | orchestrator | 2026-01-05 01:56:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:48.893478 | orchestrator | 2026-01-05 01:56:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:48.894908 | orchestrator | 2026-01-05 01:56:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:56:48.894950 | orchestrator | 2026-01-05 01:56:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:51.943990 | orchestrator | 2026-01-05 01:56:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:51.945937 | orchestrator | 2026-01-05 01:56:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:51.946008 | orchestrator | 2026-01-05 01:56:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:54.990447 | orchestrator | 2026-01-05 01:56:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:54.991382 | orchestrator | 2026-01-05 01:56:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:54.991446 | orchestrator | 2026-01-05 01:56:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:58.044629 | orchestrator | 2026-01-05 01:56:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:56:58.046247 | orchestrator | 2026-01-05 01:56:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:56:58.046297 | orchestrator | 2026-01-05 01:56:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:01.096391 | orchestrator | 2026-01-05 01:57:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:01.099987 | orchestrator | 2026-01-05 01:57:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:01.100060 | orchestrator | 2026-01-05 01:57:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:04.150339 | orchestrator | 2026-01-05 01:57:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:04.151319 | orchestrator | 2026-01-05 01:57:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:04.151364 | orchestrator | 2026-01-05 01:57:04 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:57:07.198975 | orchestrator | 2026-01-05 01:57:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:07.200952 | orchestrator | 2026-01-05 01:57:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:07.201018 | orchestrator | 2026-01-05 01:57:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:10.252319 | orchestrator | 2026-01-05 01:57:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:10.254368 | orchestrator | 2026-01-05 01:57:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:10.254420 | orchestrator | 2026-01-05 01:57:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:13.303261 | orchestrator | 2026-01-05 01:57:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:13.305097 | orchestrator | 2026-01-05 01:57:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:13.305159 | orchestrator | 2026-01-05 01:57:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:16.352984 | orchestrator | 2026-01-05 01:57:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:16.353821 | orchestrator | 2026-01-05 01:57:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:16.353889 | orchestrator | 2026-01-05 01:57:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:19.412455 | orchestrator | 2026-01-05 01:57:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:19.414294 | orchestrator | 2026-01-05 01:57:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:19.414399 | orchestrator | 2026-01-05 01:57:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:22.457009 | orchestrator | 2026-01-05 
01:57:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:22.459117 | orchestrator | 2026-01-05 01:57:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:22.459526 | orchestrator | 2026-01-05 01:57:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:25.502853 | orchestrator | 2026-01-05 01:57:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:25.503610 | orchestrator | 2026-01-05 01:57:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:25.503789 | orchestrator | 2026-01-05 01:57:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:28.549269 | orchestrator | 2026-01-05 01:57:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:28.551760 | orchestrator | 2026-01-05 01:57:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:28.551823 | orchestrator | 2026-01-05 01:57:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:31.600389 | orchestrator | 2026-01-05 01:57:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:31.601791 | orchestrator | 2026-01-05 01:57:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:31.602160 | orchestrator | 2026-01-05 01:57:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:34.650275 | orchestrator | 2026-01-05 01:57:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:34.652091 | orchestrator | 2026-01-05 01:57:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:34.652185 | orchestrator | 2026-01-05 01:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:37.704432 | orchestrator | 2026-01-05 01:57:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 01:57:37.705969 | orchestrator | 2026-01-05 01:57:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:37.706046 | orchestrator | 2026-01-05 01:57:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:40.756083 | orchestrator | 2026-01-05 01:57:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:40.758155 | orchestrator | 2026-01-05 01:57:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:40.758256 | orchestrator | 2026-01-05 01:57:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:43.812844 | orchestrator | 2026-01-05 01:57:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:43.814970 | orchestrator | 2026-01-05 01:57:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:43.815024 | orchestrator | 2026-01-05 01:57:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:46.869654 | orchestrator | 2026-01-05 01:57:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:46.871122 | orchestrator | 2026-01-05 01:57:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:46.871214 | orchestrator | 2026-01-05 01:57:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:49.917455 | orchestrator | 2026-01-05 01:57:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:49.919729 | orchestrator | 2026-01-05 01:57:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:49.919787 | orchestrator | 2026-01-05 01:57:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:52.967922 | orchestrator | 2026-01-05 01:57:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:52.969944 | orchestrator | 2026-01-05 01:57:52 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:52.970010 | orchestrator | 2026-01-05 01:57:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:56.020583 | orchestrator | 2026-01-05 01:57:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:56.021715 | orchestrator | 2026-01-05 01:57:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:56.021850 | orchestrator | 2026-01-05 01:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:59.073772 | orchestrator | 2026-01-05 01:57:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:57:59.074800 | orchestrator | 2026-01-05 01:57:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:57:59.074842 | orchestrator | 2026-01-05 01:57:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:58:02.123575 | orchestrator | 2026-01-05 01:58:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:58:02.123871 | orchestrator | 2026-01-05 01:58:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:58:02.124345 | orchestrator | 2026-01-05 01:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:58:05.167977 | orchestrator | 2026-01-05 01:58:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:58:05.169725 | orchestrator | 2026-01-05 01:58:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 01:58:05.169799 | orchestrator | 2026-01-05 01:58:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:58:08.221156 | orchestrator | 2026-01-05 01:58:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 01:58:08.222550 | orchestrator | 2026-01-05 01:58:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
01:58:08.222602 | orchestrator | 2026-01-05 01:58:08 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:58:11.268867 | orchestrator | 2026-01-05 01:58:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 01:58:11.271862 | orchestrator | 2026-01-05 01:58:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 01:58:11.271996 | orchestrator | 2026-01-05 01:58:11 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated roughly every 3 seconds from 01:58:14 through 02:03:37; both tasks remained in state STARTED throughout ...]
2026-01-05 02:03:40.762637 | orchestrator | 2026-01-05 02:03:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 02:03:40.763260 | orchestrator | 2026-01-05 02:03:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:03:40.763331 | orchestrator | 2026-01-05 02:03:40 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 02:03:43.817552 | orchestrator | 2026-01-05 02:03:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:03:43.821187 | orchestrator | 2026-01-05 02:03:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:03:43.821249 | orchestrator | 2026-01-05 02:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:46.867938 | orchestrator | 2026-01-05 02:03:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:03:46.870155 | orchestrator | 2026-01-05 02:03:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:03:46.870231 | orchestrator | 2026-01-05 02:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:49.912187 | orchestrator | 2026-01-05 02:03:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:03:49.912627 | orchestrator | 2026-01-05 02:03:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:03:49.912658 | orchestrator | 2026-01-05 02:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:52.964506 | orchestrator | 2026-01-05 02:03:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:03:52.965918 | orchestrator | 2026-01-05 02:03:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:03:52.966062 | orchestrator | 2026-01-05 02:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:56.017504 | orchestrator | 2026-01-05 02:03:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:03:56.020913 | orchestrator | 2026-01-05 02:03:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:03:56.020984 | orchestrator | 2026-01-05 02:03:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:59.075517 | orchestrator | 2026-01-05 
02:03:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:03:59.079290 | orchestrator | 2026-01-05 02:03:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:03:59.079380 | orchestrator | 2026-01-05 02:03:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:02.137139 | orchestrator | 2026-01-05 02:04:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:02.138917 | orchestrator | 2026-01-05 02:04:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:02.138964 | orchestrator | 2026-01-05 02:04:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:05.192004 | orchestrator | 2026-01-05 02:04:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:05.193895 | orchestrator | 2026-01-05 02:04:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:05.194053 | orchestrator | 2026-01-05 02:04:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:08.247554 | orchestrator | 2026-01-05 02:04:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:08.250441 | orchestrator | 2026-01-05 02:04:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:08.250514 | orchestrator | 2026-01-05 02:04:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:11.299220 | orchestrator | 2026-01-05 02:04:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:11.300154 | orchestrator | 2026-01-05 02:04:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:11.300251 | orchestrator | 2026-01-05 02:04:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:14.352675 | orchestrator | 2026-01-05 02:04:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:04:14.354886 | orchestrator | 2026-01-05 02:04:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:14.354932 | orchestrator | 2026-01-05 02:04:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:17.409497 | orchestrator | 2026-01-05 02:04:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:17.412587 | orchestrator | 2026-01-05 02:04:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:17.412657 | orchestrator | 2026-01-05 02:04:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:20.458531 | orchestrator | 2026-01-05 02:04:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:20.460892 | orchestrator | 2026-01-05 02:04:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:20.460979 | orchestrator | 2026-01-05 02:04:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:23.497520 | orchestrator | 2026-01-05 02:04:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:23.498332 | orchestrator | 2026-01-05 02:04:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:23.498381 | orchestrator | 2026-01-05 02:04:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:26.550244 | orchestrator | 2026-01-05 02:04:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:26.551645 | orchestrator | 2026-01-05 02:04:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:26.551689 | orchestrator | 2026-01-05 02:04:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:29.602481 | orchestrator | 2026-01-05 02:04:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:29.604262 | orchestrator | 2026-01-05 02:04:29 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:29.604298 | orchestrator | 2026-01-05 02:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:32.660004 | orchestrator | 2026-01-05 02:04:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:32.662500 | orchestrator | 2026-01-05 02:04:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:32.662561 | orchestrator | 2026-01-05 02:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:35.717630 | orchestrator | 2026-01-05 02:04:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:35.720091 | orchestrator | 2026-01-05 02:04:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:35.720139 | orchestrator | 2026-01-05 02:04:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:38.772484 | orchestrator | 2026-01-05 02:04:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:38.774448 | orchestrator | 2026-01-05 02:04:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:38.774860 | orchestrator | 2026-01-05 02:04:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:41.822106 | orchestrator | 2026-01-05 02:04:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:41.824279 | orchestrator | 2026-01-05 02:04:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:41.824562 | orchestrator | 2026-01-05 02:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:44.874444 | orchestrator | 2026-01-05 02:04:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:44.876074 | orchestrator | 2026-01-05 02:04:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:04:44.876145 | orchestrator | 2026-01-05 02:04:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:47.919151 | orchestrator | 2026-01-05 02:04:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:47.919429 | orchestrator | 2026-01-05 02:04:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:47.919464 | orchestrator | 2026-01-05 02:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:50.966306 | orchestrator | 2026-01-05 02:04:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:50.968198 | orchestrator | 2026-01-05 02:04:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:50.968257 | orchestrator | 2026-01-05 02:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:54.024012 | orchestrator | 2026-01-05 02:04:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:54.024169 | orchestrator | 2026-01-05 02:04:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:54.024185 | orchestrator | 2026-01-05 02:04:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:57.073934 | orchestrator | 2026-01-05 02:04:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:04:57.076395 | orchestrator | 2026-01-05 02:04:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:04:57.076510 | orchestrator | 2026-01-05 02:04:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:00.122419 | orchestrator | 2026-01-05 02:05:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:00.123720 | orchestrator | 2026-01-05 02:05:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:00.123893 | orchestrator | 2026-01-05 02:05:00 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:05:03.173911 | orchestrator | 2026-01-05 02:05:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:03.174124 | orchestrator | 2026-01-05 02:05:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:03.174145 | orchestrator | 2026-01-05 02:05:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:06.227505 | orchestrator | 2026-01-05 02:05:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:06.229469 | orchestrator | 2026-01-05 02:05:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:06.229545 | orchestrator | 2026-01-05 02:05:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:09.275712 | orchestrator | 2026-01-05 02:05:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:09.277355 | orchestrator | 2026-01-05 02:05:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:09.277462 | orchestrator | 2026-01-05 02:05:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:12.328850 | orchestrator | 2026-01-05 02:05:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:12.331539 | orchestrator | 2026-01-05 02:05:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:12.331603 | orchestrator | 2026-01-05 02:05:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:15.377111 | orchestrator | 2026-01-05 02:05:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:15.379960 | orchestrator | 2026-01-05 02:05:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:15.380028 | orchestrator | 2026-01-05 02:05:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:18.429088 | orchestrator | 2026-01-05 
02:05:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:18.432870 | orchestrator | 2026-01-05 02:05:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:18.432948 | orchestrator | 2026-01-05 02:05:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:21.489360 | orchestrator | 2026-01-05 02:05:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:21.490873 | orchestrator | 2026-01-05 02:05:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:21.490933 | orchestrator | 2026-01-05 02:05:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:24.540921 | orchestrator | 2026-01-05 02:05:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:24.541921 | orchestrator | 2026-01-05 02:05:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:24.541967 | orchestrator | 2026-01-05 02:05:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:27.593253 | orchestrator | 2026-01-05 02:05:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:27.596298 | orchestrator | 2026-01-05 02:05:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:27.596392 | orchestrator | 2026-01-05 02:05:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:30.646481 | orchestrator | 2026-01-05 02:05:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:30.648011 | orchestrator | 2026-01-05 02:05:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:30.648064 | orchestrator | 2026-01-05 02:05:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:33.698321 | orchestrator | 2026-01-05 02:05:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:05:33.700500 | orchestrator | 2026-01-05 02:05:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:33.700552 | orchestrator | 2026-01-05 02:05:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:36.753055 | orchestrator | 2026-01-05 02:05:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:36.755814 | orchestrator | 2026-01-05 02:05:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:36.755867 | orchestrator | 2026-01-05 02:05:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:39.805821 | orchestrator | 2026-01-05 02:05:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:39.809588 | orchestrator | 2026-01-05 02:05:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:39.809718 | orchestrator | 2026-01-05 02:05:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:42.857478 | orchestrator | 2026-01-05 02:05:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:42.858878 | orchestrator | 2026-01-05 02:05:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:42.858945 | orchestrator | 2026-01-05 02:05:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:45.907826 | orchestrator | 2026-01-05 02:05:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:45.911360 | orchestrator | 2026-01-05 02:05:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:45.911420 | orchestrator | 2026-01-05 02:05:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:48.964363 | orchestrator | 2026-01-05 02:05:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:48.968250 | orchestrator | 2026-01-05 02:05:48 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:48.968422 | orchestrator | 2026-01-05 02:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:52.020395 | orchestrator | 2026-01-05 02:05:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:52.022145 | orchestrator | 2026-01-05 02:05:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:52.022219 | orchestrator | 2026-01-05 02:05:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:55.080043 | orchestrator | 2026-01-05 02:05:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:55.082559 | orchestrator | 2026-01-05 02:05:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:55.082772 | orchestrator | 2026-01-05 02:05:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:58.138877 | orchestrator | 2026-01-05 02:05:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:05:58.140673 | orchestrator | 2026-01-05 02:05:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:05:58.140794 | orchestrator | 2026-01-05 02:05:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:01.192133 | orchestrator | 2026-01-05 02:06:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:01.194154 | orchestrator | 2026-01-05 02:06:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:01.194205 | orchestrator | 2026-01-05 02:06:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:04.246127 | orchestrator | 2026-01-05 02:06:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:04.248894 | orchestrator | 2026-01-05 02:06:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:06:04.248957 | orchestrator | 2026-01-05 02:06:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:07.305959 | orchestrator | 2026-01-05 02:06:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:07.308756 | orchestrator | 2026-01-05 02:06:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:07.308824 | orchestrator | 2026-01-05 02:06:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:10.359020 | orchestrator | 2026-01-05 02:06:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:10.360488 | orchestrator | 2026-01-05 02:06:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:10.360540 | orchestrator | 2026-01-05 02:06:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:13.407327 | orchestrator | 2026-01-05 02:06:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:13.408353 | orchestrator | 2026-01-05 02:06:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:13.408451 | orchestrator | 2026-01-05 02:06:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:16.446525 | orchestrator | 2026-01-05 02:06:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:16.449492 | orchestrator | 2026-01-05 02:06:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:16.449607 | orchestrator | 2026-01-05 02:06:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:19.493185 | orchestrator | 2026-01-05 02:06:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:19.496840 | orchestrator | 2026-01-05 02:06:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:19.497761 | orchestrator | 2026-01-05 02:06:19 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:06:22.542414 | orchestrator | 2026-01-05 02:06:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:22.544446 | orchestrator | 2026-01-05 02:06:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:22.544495 | orchestrator | 2026-01-05 02:06:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:25.599349 | orchestrator | 2026-01-05 02:06:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:25.600923 | orchestrator | 2026-01-05 02:06:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:25.600995 | orchestrator | 2026-01-05 02:06:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:28.651092 | orchestrator | 2026-01-05 02:06:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:28.653296 | orchestrator | 2026-01-05 02:06:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:28.653379 | orchestrator | 2026-01-05 02:06:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:31.711091 | orchestrator | 2026-01-05 02:06:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:31.713779 | orchestrator | 2026-01-05 02:06:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:31.713843 | orchestrator | 2026-01-05 02:06:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:34.767020 | orchestrator | 2026-01-05 02:06:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:34.768997 | orchestrator | 2026-01-05 02:06:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:34.769076 | orchestrator | 2026-01-05 02:06:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:37.813479 | orchestrator | 2026-01-05 
02:06:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:37.816746 | orchestrator | 2026-01-05 02:06:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:37.816833 | orchestrator | 2026-01-05 02:06:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:40.865427 | orchestrator | 2026-01-05 02:06:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:40.867044 | orchestrator | 2026-01-05 02:06:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:40.867120 | orchestrator | 2026-01-05 02:06:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:43.917590 | orchestrator | 2026-01-05 02:06:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:43.918627 | orchestrator | 2026-01-05 02:06:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:43.918658 | orchestrator | 2026-01-05 02:06:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:46.971925 | orchestrator | 2026-01-05 02:06:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:46.973182 | orchestrator | 2026-01-05 02:06:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:46.973245 | orchestrator | 2026-01-05 02:06:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:50.026433 | orchestrator | 2026-01-05 02:06:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:50.028084 | orchestrator | 2026-01-05 02:06:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:50.028189 | orchestrator | 2026-01-05 02:06:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:53.076755 | orchestrator | 2026-01-05 02:06:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:06:53.077833 | orchestrator | 2026-01-05 02:06:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:53.077890 | orchestrator | 2026-01-05 02:06:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:56.121630 | orchestrator | 2026-01-05 02:06:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:56.123108 | orchestrator | 2026-01-05 02:06:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:56.123173 | orchestrator | 2026-01-05 02:06:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:59.163453 | orchestrator | 2026-01-05 02:06:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:06:59.165875 | orchestrator | 2026-01-05 02:06:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:06:59.167743 | orchestrator | 2026-01-05 02:06:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:02.222696 | orchestrator | 2026-01-05 02:07:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:02.224941 | orchestrator | 2026-01-05 02:07:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:07:02.224988 | orchestrator | 2026-01-05 02:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:05.268605 | orchestrator | 2026-01-05 02:07:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:05.270588 | orchestrator | 2026-01-05 02:07:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:07:05.270628 | orchestrator | 2026-01-05 02:07:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:08.320132 | orchestrator | 2026-01-05 02:07:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:08.321711 | orchestrator | 2026-01-05 02:07:08 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:07:08.321731 | orchestrator | 2026-01-05 02:07:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:11.377286 | orchestrator | 2026-01-05 02:07:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:11.379573 | orchestrator | 2026-01-05 02:07:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:07:11.379642 | orchestrator | 2026-01-05 02:07:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:14.427689 | orchestrator | 2026-01-05 02:07:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:14.429061 | orchestrator | 2026-01-05 02:07:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:07:14.429106 | orchestrator | 2026-01-05 02:07:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:17.480923 | orchestrator | 2026-01-05 02:07:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:17.481700 | orchestrator | 2026-01-05 02:07:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:07:17.481734 | orchestrator | 2026-01-05 02:07:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:20.530256 | orchestrator | 2026-01-05 02:07:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:20.530937 | orchestrator | 2026-01-05 02:07:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:07:20.530961 | orchestrator | 2026-01-05 02:07:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:23.584359 | orchestrator | 2026-01-05 02:07:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:23.586577 | orchestrator | 2026-01-05 02:07:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:07:23.586648 | orchestrator | 2026-01-05 02:07:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:07:26.629266 | orchestrator | 2026-01-05 02:07:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:07:26.631339 | orchestrator | 2026-01-05 02:07:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:07:26.631392 | orchestrator | 2026-01-05 02:07:26 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 02:07:29 through 02:12:22; tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remained in state STARTED throughout ...]
2026-01-05 02:12:25.686674 | orchestrator | 2026-01-05 02:12:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:25.688444 | orchestrator | 2026-01-05 02:12:25 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:25.688526 | orchestrator | 2026-01-05 02:12:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:28.736380 | orchestrator | 2026-01-05 02:12:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:28.738722 | orchestrator | 2026-01-05 02:12:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:28.738820 | orchestrator | 2026-01-05 02:12:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:31.785753 | orchestrator | 2026-01-05 02:12:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:31.787349 | orchestrator | 2026-01-05 02:12:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:31.787429 | orchestrator | 2026-01-05 02:12:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:34.836431 | orchestrator | 2026-01-05 02:12:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:34.837995 | orchestrator | 2026-01-05 02:12:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:34.838102 | orchestrator | 2026-01-05 02:12:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:37.884000 | orchestrator | 2026-01-05 02:12:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:37.885865 | orchestrator | 2026-01-05 02:12:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:37.886111 | orchestrator | 2026-01-05 02:12:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:40.933607 | orchestrator | 2026-01-05 02:12:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:40.935266 | orchestrator | 2026-01-05 02:12:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:12:40.935353 | orchestrator | 2026-01-05 02:12:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:43.977162 | orchestrator | 2026-01-05 02:12:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:43.978608 | orchestrator | 2026-01-05 02:12:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:43.978681 | orchestrator | 2026-01-05 02:12:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:47.027151 | orchestrator | 2026-01-05 02:12:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:47.029707 | orchestrator | 2026-01-05 02:12:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:47.029781 | orchestrator | 2026-01-05 02:12:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:50.083663 | orchestrator | 2026-01-05 02:12:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:50.086337 | orchestrator | 2026-01-05 02:12:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:50.087663 | orchestrator | 2026-01-05 02:12:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:53.137992 | orchestrator | 2026-01-05 02:12:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:53.140976 | orchestrator | 2026-01-05 02:12:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:53.141092 | orchestrator | 2026-01-05 02:12:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:56.194922 | orchestrator | 2026-01-05 02:12:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:56.197454 | orchestrator | 2026-01-05 02:12:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:56.197507 | orchestrator | 2026-01-05 02:12:56 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:12:59.243802 | orchestrator | 2026-01-05 02:12:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:12:59.248501 | orchestrator | 2026-01-05 02:12:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:12:59.248567 | orchestrator | 2026-01-05 02:12:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:02.295693 | orchestrator | 2026-01-05 02:13:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:02.297411 | orchestrator | 2026-01-05 02:13:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:02.297519 | orchestrator | 2026-01-05 02:13:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:05.346734 | orchestrator | 2026-01-05 02:13:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:05.348365 | orchestrator | 2026-01-05 02:13:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:05.348431 | orchestrator | 2026-01-05 02:13:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:08.388115 | orchestrator | 2026-01-05 02:13:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:08.390161 | orchestrator | 2026-01-05 02:13:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:08.390680 | orchestrator | 2026-01-05 02:13:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:11.435128 | orchestrator | 2026-01-05 02:13:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:11.436718 | orchestrator | 2026-01-05 02:13:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:11.436759 | orchestrator | 2026-01-05 02:13:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:14.492393 | orchestrator | 2026-01-05 
02:13:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:14.494494 | orchestrator | 2026-01-05 02:13:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:14.494570 | orchestrator | 2026-01-05 02:13:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:17.541101 | orchestrator | 2026-01-05 02:13:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:17.542825 | orchestrator | 2026-01-05 02:13:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:17.542861 | orchestrator | 2026-01-05 02:13:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:20.600690 | orchestrator | 2026-01-05 02:13:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:20.602603 | orchestrator | 2026-01-05 02:13:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:20.602648 | orchestrator | 2026-01-05 02:13:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:23.652263 | orchestrator | 2026-01-05 02:13:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:23.654881 | orchestrator | 2026-01-05 02:13:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:23.655012 | orchestrator | 2026-01-05 02:13:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:26.699319 | orchestrator | 2026-01-05 02:13:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:26.699465 | orchestrator | 2026-01-05 02:13:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:26.699479 | orchestrator | 2026-01-05 02:13:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:29.744791 | orchestrator | 2026-01-05 02:13:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:13:29.745292 | orchestrator | 2026-01-05 02:13:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:29.745332 | orchestrator | 2026-01-05 02:13:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:32.799740 | orchestrator | 2026-01-05 02:13:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:32.800849 | orchestrator | 2026-01-05 02:13:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:32.800883 | orchestrator | 2026-01-05 02:13:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:35.846688 | orchestrator | 2026-01-05 02:13:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:35.848867 | orchestrator | 2026-01-05 02:13:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:35.848953 | orchestrator | 2026-01-05 02:13:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:38.891572 | orchestrator | 2026-01-05 02:13:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:38.894504 | orchestrator | 2026-01-05 02:13:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:38.894593 | orchestrator | 2026-01-05 02:13:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:41.941795 | orchestrator | 2026-01-05 02:13:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:41.944225 | orchestrator | 2026-01-05 02:13:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:41.944285 | orchestrator | 2026-01-05 02:13:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:45.002564 | orchestrator | 2026-01-05 02:13:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:45.004171 | orchestrator | 2026-01-05 02:13:45 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:45.004294 | orchestrator | 2026-01-05 02:13:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:48.053239 | orchestrator | 2026-01-05 02:13:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:48.054420 | orchestrator | 2026-01-05 02:13:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:48.054483 | orchestrator | 2026-01-05 02:13:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:51.102588 | orchestrator | 2026-01-05 02:13:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:51.103895 | orchestrator | 2026-01-05 02:13:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:51.103950 | orchestrator | 2026-01-05 02:13:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:54.147208 | orchestrator | 2026-01-05 02:13:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:54.148971 | orchestrator | 2026-01-05 02:13:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:54.149020 | orchestrator | 2026-01-05 02:13:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:57.202545 | orchestrator | 2026-01-05 02:13:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:13:57.204253 | orchestrator | 2026-01-05 02:13:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:13:57.204302 | orchestrator | 2026-01-05 02:13:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:00.247369 | orchestrator | 2026-01-05 02:14:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:00.249235 | orchestrator | 2026-01-05 02:14:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:14:00.249319 | orchestrator | 2026-01-05 02:14:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:03.303638 | orchestrator | 2026-01-05 02:14:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:03.305758 | orchestrator | 2026-01-05 02:14:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:03.305880 | orchestrator | 2026-01-05 02:14:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:06.367097 | orchestrator | 2026-01-05 02:14:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:06.368440 | orchestrator | 2026-01-05 02:14:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:06.368457 | orchestrator | 2026-01-05 02:14:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:09.422810 | orchestrator | 2026-01-05 02:14:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:09.424494 | orchestrator | 2026-01-05 02:14:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:09.424552 | orchestrator | 2026-01-05 02:14:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:12.479186 | orchestrator | 2026-01-05 02:14:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:12.483394 | orchestrator | 2026-01-05 02:14:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:12.483454 | orchestrator | 2026-01-05 02:14:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:15.538880 | orchestrator | 2026-01-05 02:14:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:15.539669 | orchestrator | 2026-01-05 02:14:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:15.539946 | orchestrator | 2026-01-05 02:14:15 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:14:18.597545 | orchestrator | 2026-01-05 02:14:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:18.599433 | orchestrator | 2026-01-05 02:14:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:18.599475 | orchestrator | 2026-01-05 02:14:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:21.654296 | orchestrator | 2026-01-05 02:14:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:21.656291 | orchestrator | 2026-01-05 02:14:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:21.656344 | orchestrator | 2026-01-05 02:14:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:24.711518 | orchestrator | 2026-01-05 02:14:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:24.713205 | orchestrator | 2026-01-05 02:14:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:24.713376 | orchestrator | 2026-01-05 02:14:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:27.759277 | orchestrator | 2026-01-05 02:14:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:27.760826 | orchestrator | 2026-01-05 02:14:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:27.760976 | orchestrator | 2026-01-05 02:14:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:30.803099 | orchestrator | 2026-01-05 02:14:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:30.805941 | orchestrator | 2026-01-05 02:14:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:30.806096 | orchestrator | 2026-01-05 02:14:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:33.852319 | orchestrator | 2026-01-05 
02:14:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:33.854587 | orchestrator | 2026-01-05 02:14:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:33.854735 | orchestrator | 2026-01-05 02:14:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:36.904342 | orchestrator | 2026-01-05 02:14:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:36.907753 | orchestrator | 2026-01-05 02:14:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:36.907817 | orchestrator | 2026-01-05 02:14:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:39.957659 | orchestrator | 2026-01-05 02:14:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:39.959281 | orchestrator | 2026-01-05 02:14:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:39.959472 | orchestrator | 2026-01-05 02:14:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:43.014597 | orchestrator | 2026-01-05 02:14:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:43.016582 | orchestrator | 2026-01-05 02:14:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:43.016786 | orchestrator | 2026-01-05 02:14:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:46.061544 | orchestrator | 2026-01-05 02:14:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:46.062496 | orchestrator | 2026-01-05 02:14:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:46.062534 | orchestrator | 2026-01-05 02:14:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:49.103958 | orchestrator | 2026-01-05 02:14:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:14:49.104335 | orchestrator | 2026-01-05 02:14:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:49.104352 | orchestrator | 2026-01-05 02:14:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:52.150976 | orchestrator | 2026-01-05 02:14:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:52.152213 | orchestrator | 2026-01-05 02:14:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:52.152253 | orchestrator | 2026-01-05 02:14:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:55.199781 | orchestrator | 2026-01-05 02:14:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:55.199859 | orchestrator | 2026-01-05 02:14:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:55.199866 | orchestrator | 2026-01-05 02:14:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:58.239519 | orchestrator | 2026-01-05 02:14:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:14:58.241962 | orchestrator | 2026-01-05 02:14:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:14:58.242076 | orchestrator | 2026-01-05 02:14:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:01.284815 | orchestrator | 2026-01-05 02:15:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:01.286270 | orchestrator | 2026-01-05 02:15:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:01.286327 | orchestrator | 2026-01-05 02:15:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:04.329810 | orchestrator | 2026-01-05 02:15:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:04.332493 | orchestrator | 2026-01-05 02:15:04 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:04.332633 | orchestrator | 2026-01-05 02:15:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:07.381549 | orchestrator | 2026-01-05 02:15:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:07.384298 | orchestrator | 2026-01-05 02:15:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:07.384384 | orchestrator | 2026-01-05 02:15:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:10.429218 | orchestrator | 2026-01-05 02:15:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:10.430711 | orchestrator | 2026-01-05 02:15:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:10.430768 | orchestrator | 2026-01-05 02:15:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:13.476861 | orchestrator | 2026-01-05 02:15:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:13.480035 | orchestrator | 2026-01-05 02:15:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:13.480181 | orchestrator | 2026-01-05 02:15:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:16.530241 | orchestrator | 2026-01-05 02:15:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:16.532655 | orchestrator | 2026-01-05 02:15:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:16.532740 | orchestrator | 2026-01-05 02:15:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:19.577024 | orchestrator | 2026-01-05 02:15:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:19.579230 | orchestrator | 2026-01-05 02:15:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:15:19.579344 | orchestrator | 2026-01-05 02:15:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:22.636055 | orchestrator | 2026-01-05 02:15:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:22.638871 | orchestrator | 2026-01-05 02:15:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:22.638924 | orchestrator | 2026-01-05 02:15:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:25.695684 | orchestrator | 2026-01-05 02:15:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:25.697453 | orchestrator | 2026-01-05 02:15:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:25.697501 | orchestrator | 2026-01-05 02:15:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:28.747939 | orchestrator | 2026-01-05 02:15:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:28.750789 | orchestrator | 2026-01-05 02:15:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:28.750857 | orchestrator | 2026-01-05 02:15:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:31.799405 | orchestrator | 2026-01-05 02:15:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:31.801819 | orchestrator | 2026-01-05 02:15:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:31.801880 | orchestrator | 2026-01-05 02:15:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:34.847259 | orchestrator | 2026-01-05 02:15:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:34.848959 | orchestrator | 2026-01-05 02:15:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:34.849036 | orchestrator | 2026-01-05 02:15:34 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:15:37.905519 | orchestrator | 2026-01-05 02:15:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:37.908113 | orchestrator | 2026-01-05 02:15:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:37.908193 | orchestrator | 2026-01-05 02:15:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:40.964957 | orchestrator | 2026-01-05 02:15:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:40.967242 | orchestrator | 2026-01-05 02:15:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:40.967295 | orchestrator | 2026-01-05 02:15:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:44.019675 | orchestrator | 2026-01-05 02:15:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:44.021707 | orchestrator | 2026-01-05 02:15:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:44.022491 | orchestrator | 2026-01-05 02:15:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:47.074259 | orchestrator | 2026-01-05 02:15:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:47.077255 | orchestrator | 2026-01-05 02:15:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:47.077341 | orchestrator | 2026-01-05 02:15:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:50.121624 | orchestrator | 2026-01-05 02:15:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:50.124547 | orchestrator | 2026-01-05 02:15:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:50.124624 | orchestrator | 2026-01-05 02:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:53.177736 | orchestrator | 2026-01-05 
02:15:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:53.178891 | orchestrator | 2026-01-05 02:15:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:53.178937 | orchestrator | 2026-01-05 02:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:56.235712 | orchestrator | 2026-01-05 02:15:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:56.239688 | orchestrator | 2026-01-05 02:15:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:56.239787 | orchestrator | 2026-01-05 02:15:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:59.293215 | orchestrator | 2026-01-05 02:15:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:15:59.295714 | orchestrator | 2026-01-05 02:15:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:15:59.295827 | orchestrator | 2026-01-05 02:15:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:16:02.344845 | orchestrator | 2026-01-05 02:16:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:16:02.346190 | orchestrator | 2026-01-05 02:16:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:16:02.346302 | orchestrator | 2026-01-05 02:16:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:16:05.390966 | orchestrator | 2026-01-05 02:16:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:16:05.392963 | orchestrator | 2026-01-05 02:16:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:16:05.393056 | orchestrator | 2026-01-05 02:16:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:16:08.438270 | orchestrator | 2026-01-05 02:16:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:16:08.439526 | orchestrator | 2026-01-05 02:16:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:16:08.439570 | orchestrator | 2026-01-05 02:16:08 | INFO  | Wait 1 second(s) until the next check
2026-01-05 02:16:11.480616 | orchestrator | 2026-01-05 02:16:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 02:16:11.482836 | orchestrator | 2026-01-05 02:16:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:16:11.482904 | orchestrator | 2026-01-05 02:16:11 | INFO  | Wait 1 second(s) until the next check
[... identical status-check rounds repeated roughly every 3 seconds; tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remain in state STARTED from 02:16:14 through 02:21:37 ...]
2026-01-05 02:21:40.974126 | orchestrator | 2026-01-05 02:21:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 02:21:40.974266 | orchestrator | 2026-01-05 02:21:40 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:21:40.974283 | orchestrator | 2026-01-05 02:21:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:44.043246 | orchestrator | 2026-01-05 02:21:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:21:44.045216 | orchestrator | 2026-01-05 02:21:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:21:44.045277 | orchestrator | 2026-01-05 02:21:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:47.097198 | orchestrator | 2026-01-05 02:21:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:21:47.098880 | orchestrator | 2026-01-05 02:21:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:21:47.098955 | orchestrator | 2026-01-05 02:21:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:50.149052 | orchestrator | 2026-01-05 02:21:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:21:50.153890 | orchestrator | 2026-01-05 02:21:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:21:50.154153 | orchestrator | 2026-01-05 02:21:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:53.199420 | orchestrator | 2026-01-05 02:21:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:21:53.200584 | orchestrator | 2026-01-05 02:21:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:21:53.200688 | orchestrator | 2026-01-05 02:21:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:56.268405 | orchestrator | 2026-01-05 02:21:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:21:56.269180 | orchestrator | 2026-01-05 02:21:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:21:56.269254 | orchestrator | 2026-01-05 02:21:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:59.319397 | orchestrator | 2026-01-05 02:21:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:21:59.320837 | orchestrator | 2026-01-05 02:21:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:21:59.320891 | orchestrator | 2026-01-05 02:21:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:02.366589 | orchestrator | 2026-01-05 02:22:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:02.368502 | orchestrator | 2026-01-05 02:22:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:02.368581 | orchestrator | 2026-01-05 02:22:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:05.416912 | orchestrator | 2026-01-05 02:22:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:05.418558 | orchestrator | 2026-01-05 02:22:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:05.418600 | orchestrator | 2026-01-05 02:22:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:08.463002 | orchestrator | 2026-01-05 02:22:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:08.465305 | orchestrator | 2026-01-05 02:22:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:08.465388 | orchestrator | 2026-01-05 02:22:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:11.506486 | orchestrator | 2026-01-05 02:22:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:11.507288 | orchestrator | 2026-01-05 02:22:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:11.507334 | orchestrator | 2026-01-05 02:22:11 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:22:14.559537 | orchestrator | 2026-01-05 02:22:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:14.561307 | orchestrator | 2026-01-05 02:22:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:14.561344 | orchestrator | 2026-01-05 02:22:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:17.611259 | orchestrator | 2026-01-05 02:22:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:17.612601 | orchestrator | 2026-01-05 02:22:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:17.612646 | orchestrator | 2026-01-05 02:22:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:20.669316 | orchestrator | 2026-01-05 02:22:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:20.671043 | orchestrator | 2026-01-05 02:22:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:20.671079 | orchestrator | 2026-01-05 02:22:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:23.715904 | orchestrator | 2026-01-05 02:22:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:23.718206 | orchestrator | 2026-01-05 02:22:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:23.718354 | orchestrator | 2026-01-05 02:22:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:26.768858 | orchestrator | 2026-01-05 02:22:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:26.771179 | orchestrator | 2026-01-05 02:22:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:26.771292 | orchestrator | 2026-01-05 02:22:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:29.819030 | orchestrator | 2026-01-05 
02:22:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:29.820573 | orchestrator | 2026-01-05 02:22:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:29.820627 | orchestrator | 2026-01-05 02:22:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:32.874133 | orchestrator | 2026-01-05 02:22:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:32.875321 | orchestrator | 2026-01-05 02:22:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:32.875374 | orchestrator | 2026-01-05 02:22:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:35.925085 | orchestrator | 2026-01-05 02:22:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:35.926993 | orchestrator | 2026-01-05 02:22:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:35.927046 | orchestrator | 2026-01-05 02:22:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:38.974349 | orchestrator | 2026-01-05 02:22:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:38.976374 | orchestrator | 2026-01-05 02:22:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:38.976455 | orchestrator | 2026-01-05 02:22:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:42.026250 | orchestrator | 2026-01-05 02:22:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:42.027398 | orchestrator | 2026-01-05 02:22:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:42.027442 | orchestrator | 2026-01-05 02:22:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:45.068277 | orchestrator | 2026-01-05 02:22:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:22:45.069100 | orchestrator | 2026-01-05 02:22:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:45.069139 | orchestrator | 2026-01-05 02:22:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:48.113061 | orchestrator | 2026-01-05 02:22:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:48.113278 | orchestrator | 2026-01-05 02:22:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:48.113424 | orchestrator | 2026-01-05 02:22:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:51.162270 | orchestrator | 2026-01-05 02:22:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:51.164082 | orchestrator | 2026-01-05 02:22:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:51.164148 | orchestrator | 2026-01-05 02:22:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:54.210313 | orchestrator | 2026-01-05 02:22:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:54.211216 | orchestrator | 2026-01-05 02:22:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:54.211247 | orchestrator | 2026-01-05 02:22:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:57.258280 | orchestrator | 2026-01-05 02:22:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:22:57.258505 | orchestrator | 2026-01-05 02:22:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:22:57.258523 | orchestrator | 2026-01-05 02:22:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:00.305236 | orchestrator | 2026-01-05 02:23:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:00.308285 | orchestrator | 2026-01-05 02:23:00 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:00.308353 | orchestrator | 2026-01-05 02:23:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:03.362860 | orchestrator | 2026-01-05 02:23:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:03.364524 | orchestrator | 2026-01-05 02:23:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:03.364662 | orchestrator | 2026-01-05 02:23:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:06.408929 | orchestrator | 2026-01-05 02:23:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:06.409668 | orchestrator | 2026-01-05 02:23:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:06.409686 | orchestrator | 2026-01-05 02:23:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:09.462743 | orchestrator | 2026-01-05 02:23:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:09.463709 | orchestrator | 2026-01-05 02:23:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:09.463827 | orchestrator | 2026-01-05 02:23:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:12.512669 | orchestrator | 2026-01-05 02:23:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:12.514811 | orchestrator | 2026-01-05 02:23:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:12.514941 | orchestrator | 2026-01-05 02:23:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:15.566066 | orchestrator | 2026-01-05 02:23:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:15.567614 | orchestrator | 2026-01-05 02:23:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:23:15.567703 | orchestrator | 2026-01-05 02:23:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:18.616773 | orchestrator | 2026-01-05 02:23:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:18.617939 | orchestrator | 2026-01-05 02:23:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:18.617999 | orchestrator | 2026-01-05 02:23:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:21.664684 | orchestrator | 2026-01-05 02:23:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:21.665174 | orchestrator | 2026-01-05 02:23:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:21.665203 | orchestrator | 2026-01-05 02:23:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:24.708588 | orchestrator | 2026-01-05 02:23:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:24.710120 | orchestrator | 2026-01-05 02:23:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:24.710163 | orchestrator | 2026-01-05 02:23:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:27.754069 | orchestrator | 2026-01-05 02:23:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:27.755204 | orchestrator | 2026-01-05 02:23:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:27.755401 | orchestrator | 2026-01-05 02:23:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:30.802969 | orchestrator | 2026-01-05 02:23:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:30.805317 | orchestrator | 2026-01-05 02:23:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:30.805393 | orchestrator | 2026-01-05 02:23:30 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:23:33.848981 | orchestrator | 2026-01-05 02:23:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:33.850599 | orchestrator | 2026-01-05 02:23:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:33.850641 | orchestrator | 2026-01-05 02:23:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:36.895829 | orchestrator | 2026-01-05 02:23:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:36.898268 | orchestrator | 2026-01-05 02:23:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:36.898326 | orchestrator | 2026-01-05 02:23:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:39.948938 | orchestrator | 2026-01-05 02:23:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:39.950574 | orchestrator | 2026-01-05 02:23:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:39.950636 | orchestrator | 2026-01-05 02:23:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:42.990396 | orchestrator | 2026-01-05 02:23:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:42.992400 | orchestrator | 2026-01-05 02:23:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:42.992435 | orchestrator | 2026-01-05 02:23:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:46.045769 | orchestrator | 2026-01-05 02:23:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:46.047525 | orchestrator | 2026-01-05 02:23:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:46.047591 | orchestrator | 2026-01-05 02:23:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:49.088085 | orchestrator | 2026-01-05 
02:23:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:49.089951 | orchestrator | 2026-01-05 02:23:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:49.090128 | orchestrator | 2026-01-05 02:23:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:52.141397 | orchestrator | 2026-01-05 02:23:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:52.142137 | orchestrator | 2026-01-05 02:23:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:52.142193 | orchestrator | 2026-01-05 02:23:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:55.190482 | orchestrator | 2026-01-05 02:23:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:55.191519 | orchestrator | 2026-01-05 02:23:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:55.191719 | orchestrator | 2026-01-05 02:23:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:58.243134 | orchestrator | 2026-01-05 02:23:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:23:58.244874 | orchestrator | 2026-01-05 02:23:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:23:58.244970 | orchestrator | 2026-01-05 02:23:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:01.299961 | orchestrator | 2026-01-05 02:24:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:01.301819 | orchestrator | 2026-01-05 02:24:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:01.301902 | orchestrator | 2026-01-05 02:24:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:04.344601 | orchestrator | 2026-01-05 02:24:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:24:04.345613 | orchestrator | 2026-01-05 02:24:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:04.345679 | orchestrator | 2026-01-05 02:24:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:07.393115 | orchestrator | 2026-01-05 02:24:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:07.394495 | orchestrator | 2026-01-05 02:24:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:07.394543 | orchestrator | 2026-01-05 02:24:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:10.449455 | orchestrator | 2026-01-05 02:24:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:10.451233 | orchestrator | 2026-01-05 02:24:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:10.451290 | orchestrator | 2026-01-05 02:24:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:13.499649 | orchestrator | 2026-01-05 02:24:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:13.501466 | orchestrator | 2026-01-05 02:24:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:13.501507 | orchestrator | 2026-01-05 02:24:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:16.554607 | orchestrator | 2026-01-05 02:24:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:16.555819 | orchestrator | 2026-01-05 02:24:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:16.555924 | orchestrator | 2026-01-05 02:24:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:19.613604 | orchestrator | 2026-01-05 02:24:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:19.616521 | orchestrator | 2026-01-05 02:24:19 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:19.616601 | orchestrator | 2026-01-05 02:24:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:22.658783 | orchestrator | 2026-01-05 02:24:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:22.660303 | orchestrator | 2026-01-05 02:24:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:22.660422 | orchestrator | 2026-01-05 02:24:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:25.719731 | orchestrator | 2026-01-05 02:24:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:25.722612 | orchestrator | 2026-01-05 02:24:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:25.722714 | orchestrator | 2026-01-05 02:24:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:28.776532 | orchestrator | 2026-01-05 02:24:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:28.776849 | orchestrator | 2026-01-05 02:24:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:28.777017 | orchestrator | 2026-01-05 02:24:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:31.821943 | orchestrator | 2026-01-05 02:24:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:31.824574 | orchestrator | 2026-01-05 02:24:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:31.824704 | orchestrator | 2026-01-05 02:24:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:34.868471 | orchestrator | 2026-01-05 02:24:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:34.869498 | orchestrator | 2026-01-05 02:24:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:24:34.869553 | orchestrator | 2026-01-05 02:24:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:37.910996 | orchestrator | 2026-01-05 02:24:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:37.911804 | orchestrator | 2026-01-05 02:24:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:37.911846 | orchestrator | 2026-01-05 02:24:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:40.961078 | orchestrator | 2026-01-05 02:24:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:40.963033 | orchestrator | 2026-01-05 02:24:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:40.963101 | orchestrator | 2026-01-05 02:24:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:44.006264 | orchestrator | 2026-01-05 02:24:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:44.007512 | orchestrator | 2026-01-05 02:24:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:44.007589 | orchestrator | 2026-01-05 02:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:47.060283 | orchestrator | 2026-01-05 02:24:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:47.061783 | orchestrator | 2026-01-05 02:24:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:47.061910 | orchestrator | 2026-01-05 02:24:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:50.104484 | orchestrator | 2026-01-05 02:24:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:50.107084 | orchestrator | 2026-01-05 02:24:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:50.107218 | orchestrator | 2026-01-05 02:24:50 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:24:53.159360 | orchestrator | 2026-01-05 02:24:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:53.160979 | orchestrator | 2026-01-05 02:24:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:53.161066 | orchestrator | 2026-01-05 02:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:56.208091 | orchestrator | 2026-01-05 02:24:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:56.209221 | orchestrator | 2026-01-05 02:24:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:56.209366 | orchestrator | 2026-01-05 02:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:59.256337 | orchestrator | 2026-01-05 02:24:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:24:59.257143 | orchestrator | 2026-01-05 02:24:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:24:59.257429 | orchestrator | 2026-01-05 02:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:25:02.302928 | orchestrator | 2026-01-05 02:25:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:25:02.305636 | orchestrator | 2026-01-05 02:25:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:25:02.305735 | orchestrator | 2026-01-05 02:25:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:25:05.349421 | orchestrator | 2026-01-05 02:25:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:25:05.350973 | orchestrator | 2026-01-05 02:25:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:25:05.351015 | orchestrator | 2026-01-05 02:25:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:25:08.399509 | orchestrator | 2026-01-05 
02:25:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:25:08.401772 | orchestrator | 2026-01-05 02:25:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:25:08.401832 | orchestrator | 2026-01-05 02:25:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:25:11.450554 | orchestrator | 2026-01-05 02:25:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:25:11.454374 | orchestrator | 2026-01-05 02:25:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:25:11.454433 | orchestrator | 2026-01-05 02:25:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:25:14.493390 | orchestrator | 2026-01-05 02:25:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:25:14.495399 | orchestrator | 2026-01-05 02:25:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:25:14.495476 | orchestrator | 2026-01-05 02:25:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:25:17.534525 | orchestrator | 2026-01-05 02:25:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:25:17.536167 | orchestrator | 2026-01-05 02:25:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:25:17.536231 | orchestrator | 2026-01-05 02:25:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:25:20.587321 | orchestrator | 2026-01-05 02:25:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:25:20.588255 | orchestrator | 2026-01-05 02:25:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:25:20.588311 | orchestrator | 2026-01-05 02:25:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:25:23.635065 | orchestrator | 2026-01-05 02:25:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED
2026-01-05 02:25:23.636237 | orchestrator | 2026-01-05 02:25:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:25:23.636290 | orchestrator | 2026-01-05 02:25:23 | INFO  | Wait 1 second(s) until the next check
2026-01-05 02:25:26.689495 | orchestrator | 2026-01-05 02:25:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 02:25:26.693275 | orchestrator | 2026-01-05 02:25:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:25:26.693351 | orchestrator | 2026-01-05 02:25:26 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle for tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe repeated every ~3 seconds, both tasks remaining in state STARTED, from 02:25:29 through 02:30:38 ...]
2026-01-05 02:30:37.991975 | orchestrator | 2026-01-05 02:30:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 02:30:37.993714 | orchestrator | 2026-01-05 02:30:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:30:37.993824 | orchestrator | 2026-01-05 02:30:37 | INFO  | Wait 1 second(s) until the next check
2026-01-05 02:30:41.053628 | orchestrator | 2026-01-05 02:30:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state
STARTED 2026-01-05 02:30:41.054298 | orchestrator | 2026-01-05 02:30:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:30:41.054496 | orchestrator | 2026-01-05 02:30:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:44.099590 | orchestrator | 2026-01-05 02:30:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:30:44.101413 | orchestrator | 2026-01-05 02:30:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:30:44.101466 | orchestrator | 2026-01-05 02:30:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:47.145509 | orchestrator | 2026-01-05 02:30:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:30:47.149025 | orchestrator | 2026-01-05 02:30:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:30:47.149094 | orchestrator | 2026-01-05 02:30:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:50.190488 | orchestrator | 2026-01-05 02:30:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:30:50.191349 | orchestrator | 2026-01-05 02:30:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:30:50.191404 | orchestrator | 2026-01-05 02:30:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:53.261862 | orchestrator | 2026-01-05 02:30:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:30:53.262052 | orchestrator | 2026-01-05 02:30:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:30:53.262067 | orchestrator | 2026-01-05 02:30:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:56.337502 | orchestrator | 2026-01-05 02:30:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:30:56.339183 | orchestrator | 2026-01-05 02:30:56 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:30:56.339528 | orchestrator | 2026-01-05 02:30:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:59.384893 | orchestrator | 2026-01-05 02:30:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:30:59.385805 | orchestrator | 2026-01-05 02:30:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:30:59.385850 | orchestrator | 2026-01-05 02:30:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:02.445555 | orchestrator | 2026-01-05 02:31:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:02.448269 | orchestrator | 2026-01-05 02:31:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:02.448361 | orchestrator | 2026-01-05 02:31:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:05.513465 | orchestrator | 2026-01-05 02:31:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:05.514285 | orchestrator | 2026-01-05 02:31:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:05.514336 | orchestrator | 2026-01-05 02:31:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:08.562710 | orchestrator | 2026-01-05 02:31:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:08.564188 | orchestrator | 2026-01-05 02:31:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:08.564226 | orchestrator | 2026-01-05 02:31:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:11.616116 | orchestrator | 2026-01-05 02:31:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:11.620775 | orchestrator | 2026-01-05 02:31:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:31:11.620867 | orchestrator | 2026-01-05 02:31:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:14.670183 | orchestrator | 2026-01-05 02:31:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:14.672432 | orchestrator | 2026-01-05 02:31:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:14.672511 | orchestrator | 2026-01-05 02:31:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:17.722347 | orchestrator | 2026-01-05 02:31:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:17.724428 | orchestrator | 2026-01-05 02:31:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:17.724680 | orchestrator | 2026-01-05 02:31:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:20.776193 | orchestrator | 2026-01-05 02:31:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:20.777768 | orchestrator | 2026-01-05 02:31:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:20.777865 | orchestrator | 2026-01-05 02:31:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:23.826221 | orchestrator | 2026-01-05 02:31:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:23.832455 | orchestrator | 2026-01-05 02:31:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:23.832544 | orchestrator | 2026-01-05 02:31:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:26.879894 | orchestrator | 2026-01-05 02:31:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:26.881653 | orchestrator | 2026-01-05 02:31:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:26.881713 | orchestrator | 2026-01-05 02:31:26 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:31:29.939679 | orchestrator | 2026-01-05 02:31:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:29.941524 | orchestrator | 2026-01-05 02:31:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:29.941579 | orchestrator | 2026-01-05 02:31:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:32.993955 | orchestrator | 2026-01-05 02:31:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:32.995667 | orchestrator | 2026-01-05 02:31:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:32.995722 | orchestrator | 2026-01-05 02:31:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:36.049731 | orchestrator | 2026-01-05 02:31:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:36.053438 | orchestrator | 2026-01-05 02:31:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:36.053526 | orchestrator | 2026-01-05 02:31:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:39.103301 | orchestrator | 2026-01-05 02:31:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:39.105376 | orchestrator | 2026-01-05 02:31:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:39.105440 | orchestrator | 2026-01-05 02:31:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:42.153147 | orchestrator | 2026-01-05 02:31:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:42.156032 | orchestrator | 2026-01-05 02:31:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:42.156112 | orchestrator | 2026-01-05 02:31:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:45.203138 | orchestrator | 2026-01-05 
02:31:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:45.205139 | orchestrator | 2026-01-05 02:31:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:45.205235 | orchestrator | 2026-01-05 02:31:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:48.255335 | orchestrator | 2026-01-05 02:31:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:48.257552 | orchestrator | 2026-01-05 02:31:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:48.257671 | orchestrator | 2026-01-05 02:31:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:51.309329 | orchestrator | 2026-01-05 02:31:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:51.310724 | orchestrator | 2026-01-05 02:31:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:51.310841 | orchestrator | 2026-01-05 02:31:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:54.365739 | orchestrator | 2026-01-05 02:31:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:54.367594 | orchestrator | 2026-01-05 02:31:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:54.367717 | orchestrator | 2026-01-05 02:31:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:57.412441 | orchestrator | 2026-01-05 02:31:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:31:57.413147 | orchestrator | 2026-01-05 02:31:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:31:57.413186 | orchestrator | 2026-01-05 02:31:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:00.461335 | orchestrator | 2026-01-05 02:32:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:32:00.462072 | orchestrator | 2026-01-05 02:32:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:00.462160 | orchestrator | 2026-01-05 02:32:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:03.504902 | orchestrator | 2026-01-05 02:32:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:03.505231 | orchestrator | 2026-01-05 02:32:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:03.505249 | orchestrator | 2026-01-05 02:32:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:06.552105 | orchestrator | 2026-01-05 02:32:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:06.553547 | orchestrator | 2026-01-05 02:32:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:06.553754 | orchestrator | 2026-01-05 02:32:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:09.596421 | orchestrator | 2026-01-05 02:32:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:09.598274 | orchestrator | 2026-01-05 02:32:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:09.598340 | orchestrator | 2026-01-05 02:32:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:12.645741 | orchestrator | 2026-01-05 02:32:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:12.647619 | orchestrator | 2026-01-05 02:32:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:12.647696 | orchestrator | 2026-01-05 02:32:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:15.701415 | orchestrator | 2026-01-05 02:32:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:15.702752 | orchestrator | 2026-01-05 02:32:15 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:15.703352 | orchestrator | 2026-01-05 02:32:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:18.753028 | orchestrator | 2026-01-05 02:32:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:18.756085 | orchestrator | 2026-01-05 02:32:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:18.756255 | orchestrator | 2026-01-05 02:32:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:21.803921 | orchestrator | 2026-01-05 02:32:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:21.806514 | orchestrator | 2026-01-05 02:32:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:21.806642 | orchestrator | 2026-01-05 02:32:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:24.844017 | orchestrator | 2026-01-05 02:32:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:24.846550 | orchestrator | 2026-01-05 02:32:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:24.846675 | orchestrator | 2026-01-05 02:32:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:27.894385 | orchestrator | 2026-01-05 02:32:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:27.895931 | orchestrator | 2026-01-05 02:32:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:27.896014 | orchestrator | 2026-01-05 02:32:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:30.943117 | orchestrator | 2026-01-05 02:32:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:30.945319 | orchestrator | 2026-01-05 02:32:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:32:30.945541 | orchestrator | 2026-01-05 02:32:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:33.999108 | orchestrator | 2026-01-05 02:32:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:34.000468 | orchestrator | 2026-01-05 02:32:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:34.000528 | orchestrator | 2026-01-05 02:32:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:37.049144 | orchestrator | 2026-01-05 02:32:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:37.050835 | orchestrator | 2026-01-05 02:32:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:37.050910 | orchestrator | 2026-01-05 02:32:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:40.097991 | orchestrator | 2026-01-05 02:32:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:40.099926 | orchestrator | 2026-01-05 02:32:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:40.100099 | orchestrator | 2026-01-05 02:32:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:43.146500 | orchestrator | 2026-01-05 02:32:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:43.147521 | orchestrator | 2026-01-05 02:32:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:43.147555 | orchestrator | 2026-01-05 02:32:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:46.196364 | orchestrator | 2026-01-05 02:32:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:46.199334 | orchestrator | 2026-01-05 02:32:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:46.199419 | orchestrator | 2026-01-05 02:32:46 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:32:49.241292 | orchestrator | 2026-01-05 02:32:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:49.242915 | orchestrator | 2026-01-05 02:32:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:49.242953 | orchestrator | 2026-01-05 02:32:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:52.300057 | orchestrator | 2026-01-05 02:32:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:52.302412 | orchestrator | 2026-01-05 02:32:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:52.302471 | orchestrator | 2026-01-05 02:32:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:55.351184 | orchestrator | 2026-01-05 02:32:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:55.353474 | orchestrator | 2026-01-05 02:32:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:55.353532 | orchestrator | 2026-01-05 02:32:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:58.402310 | orchestrator | 2026-01-05 02:32:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:32:58.404191 | orchestrator | 2026-01-05 02:32:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:32:58.404271 | orchestrator | 2026-01-05 02:32:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:01.453122 | orchestrator | 2026-01-05 02:33:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:01.455380 | orchestrator | 2026-01-05 02:33:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:01.455438 | orchestrator | 2026-01-05 02:33:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:04.499664 | orchestrator | 2026-01-05 
02:33:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:04.501467 | orchestrator | 2026-01-05 02:33:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:04.501530 | orchestrator | 2026-01-05 02:33:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:07.548372 | orchestrator | 2026-01-05 02:33:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:07.550293 | orchestrator | 2026-01-05 02:33:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:07.550411 | orchestrator | 2026-01-05 02:33:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:10.601517 | orchestrator | 2026-01-05 02:33:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:10.603458 | orchestrator | 2026-01-05 02:33:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:10.603501 | orchestrator | 2026-01-05 02:33:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:13.647002 | orchestrator | 2026-01-05 02:33:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:13.648354 | orchestrator | 2026-01-05 02:33:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:13.648511 | orchestrator | 2026-01-05 02:33:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:16.698421 | orchestrator | 2026-01-05 02:33:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:16.698868 | orchestrator | 2026-01-05 02:33:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:16.699039 | orchestrator | 2026-01-05 02:33:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:19.751674 | orchestrator | 2026-01-05 02:33:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:33:19.753335 | orchestrator | 2026-01-05 02:33:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:19.753410 | orchestrator | 2026-01-05 02:33:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:22.796677 | orchestrator | 2026-01-05 02:33:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:22.798842 | orchestrator | 2026-01-05 02:33:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:22.798926 | orchestrator | 2026-01-05 02:33:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:25.844262 | orchestrator | 2026-01-05 02:33:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:25.847371 | orchestrator | 2026-01-05 02:33:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:25.847505 | orchestrator | 2026-01-05 02:33:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:28.895413 | orchestrator | 2026-01-05 02:33:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:28.896703 | orchestrator | 2026-01-05 02:33:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:28.897006 | orchestrator | 2026-01-05 02:33:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:31.944964 | orchestrator | 2026-01-05 02:33:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:31.947100 | orchestrator | 2026-01-05 02:33:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:31.947189 | orchestrator | 2026-01-05 02:33:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:34.992302 | orchestrator | 2026-01-05 02:33:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:34.994361 | orchestrator | 2026-01-05 02:33:34 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:34.994447 | orchestrator | 2026-01-05 02:33:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:38.039757 | orchestrator | 2026-01-05 02:33:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:38.040094 | orchestrator | 2026-01-05 02:33:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:38.041763 | orchestrator | 2026-01-05 02:33:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:41.093705 | orchestrator | 2026-01-05 02:33:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:41.094813 | orchestrator | 2026-01-05 02:33:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:41.094875 | orchestrator | 2026-01-05 02:33:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:44.140936 | orchestrator | 2026-01-05 02:33:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:44.142748 | orchestrator | 2026-01-05 02:33:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:44.142801 | orchestrator | 2026-01-05 02:33:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:47.182817 | orchestrator | 2026-01-05 02:33:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:47.186104 | orchestrator | 2026-01-05 02:33:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:47.186168 | orchestrator | 2026-01-05 02:33:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:50.229400 | orchestrator | 2026-01-05 02:33:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:50.231064 | orchestrator | 2026-01-05 02:33:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:33:50.231113 | orchestrator | 2026-01-05 02:33:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:53.278628 | orchestrator | 2026-01-05 02:33:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:53.279954 | orchestrator | 2026-01-05 02:33:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:53.280004 | orchestrator | 2026-01-05 02:33:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:56.330237 | orchestrator | 2026-01-05 02:33:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:56.332147 | orchestrator | 2026-01-05 02:33:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:56.332213 | orchestrator | 2026-01-05 02:33:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:59.374965 | orchestrator | 2026-01-05 02:33:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:33:59.377541 | orchestrator | 2026-01-05 02:33:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:33:59.377793 | orchestrator | 2026-01-05 02:33:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:34:02.424523 | orchestrator | 2026-01-05 02:34:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:34:02.426895 | orchestrator | 2026-01-05 02:34:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:34:02.426933 | orchestrator | 2026-01-05 02:34:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:34:05.476349 | orchestrator | 2026-01-05 02:34:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:34:05.477774 | orchestrator | 2026-01-05 02:34:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:34:05.477832 | orchestrator | 2026-01-05 02:34:05 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:34:08.525848 | orchestrator | 2026-01-05 02:34:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:34:08.528921 | orchestrator | 2026-01-05 02:34:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:34:08.528990 | orchestrator | 2026-01-05 02:34:08 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated approximately every 3 seconds from 02:34:11 through 02:39:19; both tasks remain in state STARTED throughout ...]
2026-01-05 02:39:22.956882 | orchestrator | 2026-01-05 02:39:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:22.958601 | orchestrator | 2026-01-05 02:39:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:22.958696 | orchestrator | 2026-01-05 02:39:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:39:26.010004 | orchestrator | 2026-01-05 02:39:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:26.011481 | orchestrator | 2026-01-05 02:39:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:26.011511 | orchestrator | 2026-01-05 02:39:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:29.057043 | orchestrator | 2026-01-05 02:39:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:29.060828 | orchestrator | 2026-01-05 02:39:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:29.060955 | orchestrator | 2026-01-05 02:39:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:32.110055 | orchestrator | 2026-01-05 02:39:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:32.113184 | orchestrator | 2026-01-05 02:39:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:32.113260 | orchestrator | 2026-01-05 02:39:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:35.159511 | orchestrator | 2026-01-05 02:39:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:35.161468 | orchestrator | 2026-01-05 02:39:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:35.161530 | orchestrator | 2026-01-05 02:39:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:38.217126 | orchestrator | 2026-01-05 02:39:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:38.219707 | orchestrator | 2026-01-05 02:39:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:38.219784 | orchestrator | 2026-01-05 02:39:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:41.273025 | orchestrator | 2026-01-05 
02:39:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:41.274458 | orchestrator | 2026-01-05 02:39:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:41.274527 | orchestrator | 2026-01-05 02:39:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:44.328119 | orchestrator | 2026-01-05 02:39:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:44.329780 | orchestrator | 2026-01-05 02:39:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:44.329836 | orchestrator | 2026-01-05 02:39:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:47.383713 | orchestrator | 2026-01-05 02:39:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:47.385346 | orchestrator | 2026-01-05 02:39:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:47.385502 | orchestrator | 2026-01-05 02:39:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:50.434385 | orchestrator | 2026-01-05 02:39:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:50.439426 | orchestrator | 2026-01-05 02:39:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:50.440420 | orchestrator | 2026-01-05 02:39:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:53.491066 | orchestrator | 2026-01-05 02:39:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:53.493641 | orchestrator | 2026-01-05 02:39:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:53.494109 | orchestrator | 2026-01-05 02:39:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:56.542090 | orchestrator | 2026-01-05 02:39:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:39:56.542821 | orchestrator | 2026-01-05 02:39:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:56.543300 | orchestrator | 2026-01-05 02:39:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:59.587739 | orchestrator | 2026-01-05 02:39:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:39:59.589210 | orchestrator | 2026-01-05 02:39:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:39:59.589258 | orchestrator | 2026-01-05 02:39:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:02.640180 | orchestrator | 2026-01-05 02:40:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:02.642881 | orchestrator | 2026-01-05 02:40:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:02.643043 | orchestrator | 2026-01-05 02:40:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:05.697527 | orchestrator | 2026-01-05 02:40:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:05.699224 | orchestrator | 2026-01-05 02:40:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:05.699351 | orchestrator | 2026-01-05 02:40:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:08.753650 | orchestrator | 2026-01-05 02:40:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:08.755430 | orchestrator | 2026-01-05 02:40:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:08.755492 | orchestrator | 2026-01-05 02:40:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:11.808713 | orchestrator | 2026-01-05 02:40:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:11.810486 | orchestrator | 2026-01-05 02:40:11 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:11.810533 | orchestrator | 2026-01-05 02:40:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:14.860175 | orchestrator | 2026-01-05 02:40:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:14.861606 | orchestrator | 2026-01-05 02:40:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:14.861644 | orchestrator | 2026-01-05 02:40:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:17.912665 | orchestrator | 2026-01-05 02:40:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:17.914857 | orchestrator | 2026-01-05 02:40:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:17.914897 | orchestrator | 2026-01-05 02:40:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:20.966663 | orchestrator | 2026-01-05 02:40:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:20.967951 | orchestrator | 2026-01-05 02:40:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:20.968030 | orchestrator | 2026-01-05 02:40:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:24.028692 | orchestrator | 2026-01-05 02:40:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:24.030571 | orchestrator | 2026-01-05 02:40:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:24.030720 | orchestrator | 2026-01-05 02:40:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:27.082381 | orchestrator | 2026-01-05 02:40:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:27.083890 | orchestrator | 2026-01-05 02:40:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:40:27.083965 | orchestrator | 2026-01-05 02:40:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:30.132305 | orchestrator | 2026-01-05 02:40:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:30.134310 | orchestrator | 2026-01-05 02:40:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:30.134368 | orchestrator | 2026-01-05 02:40:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:33.189041 | orchestrator | 2026-01-05 02:40:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:33.192574 | orchestrator | 2026-01-05 02:40:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:33.192638 | orchestrator | 2026-01-05 02:40:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:36.242238 | orchestrator | 2026-01-05 02:40:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:36.244830 | orchestrator | 2026-01-05 02:40:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:36.244929 | orchestrator | 2026-01-05 02:40:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:39.300457 | orchestrator | 2026-01-05 02:40:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:39.301310 | orchestrator | 2026-01-05 02:40:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:39.301916 | orchestrator | 2026-01-05 02:40:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:42.355535 | orchestrator | 2026-01-05 02:40:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:42.360064 | orchestrator | 2026-01-05 02:40:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:42.360135 | orchestrator | 2026-01-05 02:40:42 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:40:45.412728 | orchestrator | 2026-01-05 02:40:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:45.415849 | orchestrator | 2026-01-05 02:40:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:45.415938 | orchestrator | 2026-01-05 02:40:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:48.471780 | orchestrator | 2026-01-05 02:40:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:48.473693 | orchestrator | 2026-01-05 02:40:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:48.473755 | orchestrator | 2026-01-05 02:40:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:51.521700 | orchestrator | 2026-01-05 02:40:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:51.524658 | orchestrator | 2026-01-05 02:40:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:51.524725 | orchestrator | 2026-01-05 02:40:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:54.587241 | orchestrator | 2026-01-05 02:40:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:54.591353 | orchestrator | 2026-01-05 02:40:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:54.591420 | orchestrator | 2026-01-05 02:40:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:57.642092 | orchestrator | 2026-01-05 02:40:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:40:57.644626 | orchestrator | 2026-01-05 02:40:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:40:57.644688 | orchestrator | 2026-01-05 02:40:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:00.689734 | orchestrator | 2026-01-05 
02:41:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:00.690287 | orchestrator | 2026-01-05 02:41:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:00.690307 | orchestrator | 2026-01-05 02:41:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:03.728223 | orchestrator | 2026-01-05 02:41:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:03.729007 | orchestrator | 2026-01-05 02:41:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:03.729236 | orchestrator | 2026-01-05 02:41:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:06.780943 | orchestrator | 2026-01-05 02:41:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:06.783334 | orchestrator | 2026-01-05 02:41:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:06.783392 | orchestrator | 2026-01-05 02:41:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:09.836900 | orchestrator | 2026-01-05 02:41:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:09.838821 | orchestrator | 2026-01-05 02:41:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:09.838861 | orchestrator | 2026-01-05 02:41:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:12.892013 | orchestrator | 2026-01-05 02:41:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:12.892911 | orchestrator | 2026-01-05 02:41:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:12.892950 | orchestrator | 2026-01-05 02:41:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:15.951812 | orchestrator | 2026-01-05 02:41:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:41:15.954131 | orchestrator | 2026-01-05 02:41:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:15.954256 | orchestrator | 2026-01-05 02:41:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:19.024048 | orchestrator | 2026-01-05 02:41:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:19.027681 | orchestrator | 2026-01-05 02:41:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:19.027767 | orchestrator | 2026-01-05 02:41:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:22.075939 | orchestrator | 2026-01-05 02:41:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:22.076727 | orchestrator | 2026-01-05 02:41:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:22.076800 | orchestrator | 2026-01-05 02:41:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:25.129778 | orchestrator | 2026-01-05 02:41:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:25.131879 | orchestrator | 2026-01-05 02:41:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:25.131937 | orchestrator | 2026-01-05 02:41:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:28.187999 | orchestrator | 2026-01-05 02:41:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:28.191022 | orchestrator | 2026-01-05 02:41:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:28.191115 | orchestrator | 2026-01-05 02:41:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:31.247275 | orchestrator | 2026-01-05 02:41:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:31.250091 | orchestrator | 2026-01-05 02:41:31 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:31.250149 | orchestrator | 2026-01-05 02:41:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:34.304057 | orchestrator | 2026-01-05 02:41:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:34.307576 | orchestrator | 2026-01-05 02:41:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:34.307638 | orchestrator | 2026-01-05 02:41:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:37.358452 | orchestrator | 2026-01-05 02:41:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:37.360624 | orchestrator | 2026-01-05 02:41:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:37.360675 | orchestrator | 2026-01-05 02:41:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:40.419487 | orchestrator | 2026-01-05 02:41:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:40.420621 | orchestrator | 2026-01-05 02:41:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:40.420704 | orchestrator | 2026-01-05 02:41:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:43.473120 | orchestrator | 2026-01-05 02:41:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:43.474161 | orchestrator | 2026-01-05 02:41:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:43.474177 | orchestrator | 2026-01-05 02:41:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:46.524583 | orchestrator | 2026-01-05 02:41:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:46.526279 | orchestrator | 2026-01-05 02:41:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:41:46.526334 | orchestrator | 2026-01-05 02:41:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:49.577781 | orchestrator | 2026-01-05 02:41:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:49.579291 | orchestrator | 2026-01-05 02:41:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:49.579359 | orchestrator | 2026-01-05 02:41:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:52.627754 | orchestrator | 2026-01-05 02:41:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:52.631800 | orchestrator | 2026-01-05 02:41:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:52.631871 | orchestrator | 2026-01-05 02:41:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:55.686791 | orchestrator | 2026-01-05 02:41:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:55.687303 | orchestrator | 2026-01-05 02:41:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:55.687320 | orchestrator | 2026-01-05 02:41:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:58.736230 | orchestrator | 2026-01-05 02:41:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:41:58.737571 | orchestrator | 2026-01-05 02:41:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:41:58.737618 | orchestrator | 2026-01-05 02:41:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:01.789445 | orchestrator | 2026-01-05 02:42:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:01.791182 | orchestrator | 2026-01-05 02:42:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:01.791232 | orchestrator | 2026-01-05 02:42:01 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:42:04.840904 | orchestrator | 2026-01-05 02:42:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:04.842772 | orchestrator | 2026-01-05 02:42:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:04.842852 | orchestrator | 2026-01-05 02:42:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:07.889805 | orchestrator | 2026-01-05 02:42:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:07.891711 | orchestrator | 2026-01-05 02:42:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:07.891874 | orchestrator | 2026-01-05 02:42:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:10.942635 | orchestrator | 2026-01-05 02:42:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:10.943879 | orchestrator | 2026-01-05 02:42:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:10.943936 | orchestrator | 2026-01-05 02:42:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:13.993032 | orchestrator | 2026-01-05 02:42:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:13.994952 | orchestrator | 2026-01-05 02:42:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:13.994983 | orchestrator | 2026-01-05 02:42:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:17.032895 | orchestrator | 2026-01-05 02:42:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:17.034581 | orchestrator | 2026-01-05 02:42:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:17.034743 | orchestrator | 2026-01-05 02:42:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:20.086441 | orchestrator | 2026-01-05 
02:42:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:20.087913 | orchestrator | 2026-01-05 02:42:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:20.087970 | orchestrator | 2026-01-05 02:42:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:23.140050 | orchestrator | 2026-01-05 02:42:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:23.143046 | orchestrator | 2026-01-05 02:42:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:23.143109 | orchestrator | 2026-01-05 02:42:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:26.189090 | orchestrator | 2026-01-05 02:42:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:26.191238 | orchestrator | 2026-01-05 02:42:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:26.191296 | orchestrator | 2026-01-05 02:42:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:29.238082 | orchestrator | 2026-01-05 02:42:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:29.239932 | orchestrator | 2026-01-05 02:42:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:29.239971 | orchestrator | 2026-01-05 02:42:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:32.291821 | orchestrator | 2026-01-05 02:42:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:32.295097 | orchestrator | 2026-01-05 02:42:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:32.295187 | orchestrator | 2026-01-05 02:42:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:35.352394 | orchestrator | 2026-01-05 02:42:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:42:35.354199 | orchestrator | 2026-01-05 02:42:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:35.354347 | orchestrator | 2026-01-05 02:42:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:38.402304 | orchestrator | 2026-01-05 02:42:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:38.404534 | orchestrator | 2026-01-05 02:42:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:38.404568 | orchestrator | 2026-01-05 02:42:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:41.455095 | orchestrator | 2026-01-05 02:42:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:41.458237 | orchestrator | 2026-01-05 02:42:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:41.458306 | orchestrator | 2026-01-05 02:42:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:44.512068 | orchestrator | 2026-01-05 02:42:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:44.513744 | orchestrator | 2026-01-05 02:42:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:44.513791 | orchestrator | 2026-01-05 02:42:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:47.562663 | orchestrator | 2026-01-05 02:42:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:47.564756 | orchestrator | 2026-01-05 02:42:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:47.564789 | orchestrator | 2026-01-05 02:42:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:50.617551 | orchestrator | 2026-01-05 02:42:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:50.619715 | orchestrator | 2026-01-05 02:42:50 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:50.619752 | orchestrator | 2026-01-05 02:42:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:53.671685 | orchestrator | 2026-01-05 02:42:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:53.672764 | orchestrator | 2026-01-05 02:42:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:53.672863 | orchestrator | 2026-01-05 02:42:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:56.722827 | orchestrator | 2026-01-05 02:42:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:56.724821 | orchestrator | 2026-01-05 02:42:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:56.724863 | orchestrator | 2026-01-05 02:42:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:59.773190 | orchestrator | 2026-01-05 02:42:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:42:59.775350 | orchestrator | 2026-01-05 02:42:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:42:59.775422 | orchestrator | 2026-01-05 02:42:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:43:02.826960 | orchestrator | 2026-01-05 02:43:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:43:02.828516 | orchestrator | 2026-01-05 02:43:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:43:02.828563 | orchestrator | 2026-01-05 02:43:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:43:05.878564 | orchestrator | 2026-01-05 02:43:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:43:05.881116 | orchestrator | 2026-01-05 02:43:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:43:05.881191 | orchestrator | 2026-01-05 02:43:05 | INFO  | Wait 1 second(s) until the next check
2026-01-05 02:43:08.927743 | orchestrator | 2026-01-05 02:43:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 02:43:08.929630 | orchestrator | 2026-01-05 02:43:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:43:08.929696 | orchestrator | 2026-01-05 02:43:08 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds from 02:43:11 through 02:48:35; both tasks remained in state STARTED throughout]
2026-01-05 02:48:38.654269 | orchestrator | 2026-01-05 02:48:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 02:48:38.655351 | orchestrator | 2026-01-05 02:48:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:48:38.655400 | orchestrator | 2026-01-05 02:48:38 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:48:41.706936 | orchestrator | 2026-01-05 02:48:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:48:41.710078 | orchestrator | 2026-01-05 02:48:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:48:41.710181 | orchestrator | 2026-01-05 02:48:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:44.758611 | orchestrator | 2026-01-05 02:48:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:48:44.760708 | orchestrator | 2026-01-05 02:48:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:48:44.760766 | orchestrator | 2026-01-05 02:48:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:47.814653 | orchestrator | 2026-01-05 02:48:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:48:47.816616 | orchestrator | 2026-01-05 02:48:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:48:47.816698 | orchestrator | 2026-01-05 02:48:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:50.869154 | orchestrator | 2026-01-05 02:48:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:48:50.870886 | orchestrator | 2026-01-05 02:48:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:48:50.870952 | orchestrator | 2026-01-05 02:48:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:53.915945 | orchestrator | 2026-01-05 02:48:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:48:53.916403 | orchestrator | 2026-01-05 02:48:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:48:53.916604 | orchestrator | 2026-01-05 02:48:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:56.960862 | orchestrator | 2026-01-05 
02:48:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:48:56.962303 | orchestrator | 2026-01-05 02:48:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:48:56.962359 | orchestrator | 2026-01-05 02:48:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:00.016018 | orchestrator | 2026-01-05 02:49:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:00.017891 | orchestrator | 2026-01-05 02:49:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:00.017976 | orchestrator | 2026-01-05 02:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:03.061286 | orchestrator | 2026-01-05 02:49:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:03.061693 | orchestrator | 2026-01-05 02:49:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:03.061721 | orchestrator | 2026-01-05 02:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:06.107329 | orchestrator | 2026-01-05 02:49:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:06.109276 | orchestrator | 2026-01-05 02:49:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:06.109350 | orchestrator | 2026-01-05 02:49:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:09.158970 | orchestrator | 2026-01-05 02:49:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:09.160324 | orchestrator | 2026-01-05 02:49:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:09.160376 | orchestrator | 2026-01-05 02:49:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:12.202550 | orchestrator | 2026-01-05 02:49:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:49:12.203191 | orchestrator | 2026-01-05 02:49:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:12.203215 | orchestrator | 2026-01-05 02:49:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:15.255380 | orchestrator | 2026-01-05 02:49:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:15.258135 | orchestrator | 2026-01-05 02:49:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:15.258229 | orchestrator | 2026-01-05 02:49:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:18.312348 | orchestrator | 2026-01-05 02:49:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:18.314917 | orchestrator | 2026-01-05 02:49:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:18.315004 | orchestrator | 2026-01-05 02:49:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:21.370668 | orchestrator | 2026-01-05 02:49:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:21.371773 | orchestrator | 2026-01-05 02:49:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:21.371873 | orchestrator | 2026-01-05 02:49:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:24.422156 | orchestrator | 2026-01-05 02:49:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:24.423871 | orchestrator | 2026-01-05 02:49:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:24.423934 | orchestrator | 2026-01-05 02:49:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:27.472698 | orchestrator | 2026-01-05 02:49:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:27.474354 | orchestrator | 2026-01-05 02:49:27 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:27.474399 | orchestrator | 2026-01-05 02:49:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:30.514864 | orchestrator | 2026-01-05 02:49:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:30.516274 | orchestrator | 2026-01-05 02:49:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:30.516331 | orchestrator | 2026-01-05 02:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:33.566051 | orchestrator | 2026-01-05 02:49:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:33.568553 | orchestrator | 2026-01-05 02:49:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:33.568628 | orchestrator | 2026-01-05 02:49:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:36.618888 | orchestrator | 2026-01-05 02:49:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:36.620603 | orchestrator | 2026-01-05 02:49:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:36.620643 | orchestrator | 2026-01-05 02:49:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:39.661537 | orchestrator | 2026-01-05 02:49:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:39.664104 | orchestrator | 2026-01-05 02:49:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:39.664175 | orchestrator | 2026-01-05 02:49:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:42.710484 | orchestrator | 2026-01-05 02:49:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:42.712290 | orchestrator | 2026-01-05 02:49:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:49:42.712334 | orchestrator | 2026-01-05 02:49:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:45.766105 | orchestrator | 2026-01-05 02:49:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:45.767013 | orchestrator | 2026-01-05 02:49:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:45.767153 | orchestrator | 2026-01-05 02:49:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:48.810846 | orchestrator | 2026-01-05 02:49:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:48.812382 | orchestrator | 2026-01-05 02:49:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:48.812497 | orchestrator | 2026-01-05 02:49:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:51.865075 | orchestrator | 2026-01-05 02:49:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:51.867189 | orchestrator | 2026-01-05 02:49:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:51.867245 | orchestrator | 2026-01-05 02:49:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:54.920860 | orchestrator | 2026-01-05 02:49:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:54.923042 | orchestrator | 2026-01-05 02:49:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:54.923135 | orchestrator | 2026-01-05 02:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:57.974315 | orchestrator | 2026-01-05 02:49:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:49:57.976950 | orchestrator | 2026-01-05 02:49:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:49:57.977065 | orchestrator | 2026-01-05 02:49:57 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:50:01.021762 | orchestrator | 2026-01-05 02:50:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:01.025449 | orchestrator | 2026-01-05 02:50:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:01.025542 | orchestrator | 2026-01-05 02:50:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:04.072168 | orchestrator | 2026-01-05 02:50:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:04.072873 | orchestrator | 2026-01-05 02:50:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:04.072931 | orchestrator | 2026-01-05 02:50:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:07.126756 | orchestrator | 2026-01-05 02:50:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:07.128888 | orchestrator | 2026-01-05 02:50:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:07.128945 | orchestrator | 2026-01-05 02:50:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:10.187620 | orchestrator | 2026-01-05 02:50:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:10.188823 | orchestrator | 2026-01-05 02:50:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:10.188868 | orchestrator | 2026-01-05 02:50:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:13.243987 | orchestrator | 2026-01-05 02:50:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:13.245560 | orchestrator | 2026-01-05 02:50:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:13.245642 | orchestrator | 2026-01-05 02:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:16.297748 | orchestrator | 2026-01-05 
02:50:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:16.299272 | orchestrator | 2026-01-05 02:50:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:16.299324 | orchestrator | 2026-01-05 02:50:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:19.356016 | orchestrator | 2026-01-05 02:50:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:19.358421 | orchestrator | 2026-01-05 02:50:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:19.358546 | orchestrator | 2026-01-05 02:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:22.414237 | orchestrator | 2026-01-05 02:50:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:22.416217 | orchestrator | 2026-01-05 02:50:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:22.416258 | orchestrator | 2026-01-05 02:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:25.474263 | orchestrator | 2026-01-05 02:50:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:25.477387 | orchestrator | 2026-01-05 02:50:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:25.477533 | orchestrator | 2026-01-05 02:50:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:28.521811 | orchestrator | 2026-01-05 02:50:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:28.526048 | orchestrator | 2026-01-05 02:50:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:28.526121 | orchestrator | 2026-01-05 02:50:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:31.581663 | orchestrator | 2026-01-05 02:50:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:50:31.586112 | orchestrator | 2026-01-05 02:50:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:31.586186 | orchestrator | 2026-01-05 02:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:34.634522 | orchestrator | 2026-01-05 02:50:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:34.636514 | orchestrator | 2026-01-05 02:50:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:34.636552 | orchestrator | 2026-01-05 02:50:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:37.691305 | orchestrator | 2026-01-05 02:50:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:37.692853 | orchestrator | 2026-01-05 02:50:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:37.692893 | orchestrator | 2026-01-05 02:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:40.752918 | orchestrator | 2026-01-05 02:50:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:40.755994 | orchestrator | 2026-01-05 02:50:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:40.756085 | orchestrator | 2026-01-05 02:50:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:43.805218 | orchestrator | 2026-01-05 02:50:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:43.806984 | orchestrator | 2026-01-05 02:50:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:43.807048 | orchestrator | 2026-01-05 02:50:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:46.864196 | orchestrator | 2026-01-05 02:50:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:46.867212 | orchestrator | 2026-01-05 02:50:46 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:46.867267 | orchestrator | 2026-01-05 02:50:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:49.912997 | orchestrator | 2026-01-05 02:50:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:49.918290 | orchestrator | 2026-01-05 02:50:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:49.918349 | orchestrator | 2026-01-05 02:50:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:52.971382 | orchestrator | 2026-01-05 02:50:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:52.971725 | orchestrator | 2026-01-05 02:50:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:52.971767 | orchestrator | 2026-01-05 02:50:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:56.020341 | orchestrator | 2026-01-05 02:50:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:56.023074 | orchestrator | 2026-01-05 02:50:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:56.023134 | orchestrator | 2026-01-05 02:50:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:59.069728 | orchestrator | 2026-01-05 02:50:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:50:59.071361 | orchestrator | 2026-01-05 02:50:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:50:59.071434 | orchestrator | 2026-01-05 02:50:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:02.125605 | orchestrator | 2026-01-05 02:51:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:02.127066 | orchestrator | 2026-01-05 02:51:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:51:02.127098 | orchestrator | 2026-01-05 02:51:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:05.177641 | orchestrator | 2026-01-05 02:51:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:05.178977 | orchestrator | 2026-01-05 02:51:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:05.179035 | orchestrator | 2026-01-05 02:51:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:08.227434 | orchestrator | 2026-01-05 02:51:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:08.229204 | orchestrator | 2026-01-05 02:51:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:08.229276 | orchestrator | 2026-01-05 02:51:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:11.280942 | orchestrator | 2026-01-05 02:51:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:11.282345 | orchestrator | 2026-01-05 02:51:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:11.282623 | orchestrator | 2026-01-05 02:51:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:14.328576 | orchestrator | 2026-01-05 02:51:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:14.329235 | orchestrator | 2026-01-05 02:51:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:14.329273 | orchestrator | 2026-01-05 02:51:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:17.373398 | orchestrator | 2026-01-05 02:51:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:17.375254 | orchestrator | 2026-01-05 02:51:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:17.375352 | orchestrator | 2026-01-05 02:51:17 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:51:20.420904 | orchestrator | 2026-01-05 02:51:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:20.422905 | orchestrator | 2026-01-05 02:51:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:20.422979 | orchestrator | 2026-01-05 02:51:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:23.474288 | orchestrator | 2026-01-05 02:51:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:23.477258 | orchestrator | 2026-01-05 02:51:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:23.477385 | orchestrator | 2026-01-05 02:51:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:26.526095 | orchestrator | 2026-01-05 02:51:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:26.526853 | orchestrator | 2026-01-05 02:51:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:26.526899 | orchestrator | 2026-01-05 02:51:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:29.583182 | orchestrator | 2026-01-05 02:51:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:29.584376 | orchestrator | 2026-01-05 02:51:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:29.584428 | orchestrator | 2026-01-05 02:51:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:32.630339 | orchestrator | 2026-01-05 02:51:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:32.631162 | orchestrator | 2026-01-05 02:51:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:32.631227 | orchestrator | 2026-01-05 02:51:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:35.680220 | orchestrator | 2026-01-05 
02:51:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:35.682965 | orchestrator | 2026-01-05 02:51:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:35.683036 | orchestrator | 2026-01-05 02:51:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:38.735644 | orchestrator | 2026-01-05 02:51:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:38.737054 | orchestrator | 2026-01-05 02:51:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:38.737087 | orchestrator | 2026-01-05 02:51:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:41.782353 | orchestrator | 2026-01-05 02:51:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:41.784675 | orchestrator | 2026-01-05 02:51:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:41.784736 | orchestrator | 2026-01-05 02:51:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:44.835874 | orchestrator | 2026-01-05 02:51:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:44.837794 | orchestrator | 2026-01-05 02:51:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:44.837847 | orchestrator | 2026-01-05 02:51:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:47.888608 | orchestrator | 2026-01-05 02:51:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:47.890326 | orchestrator | 2026-01-05 02:51:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:47.890394 | orchestrator | 2026-01-05 02:51:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:50.935961 | orchestrator | 2026-01-05 02:51:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:51:50.937756 | orchestrator | 2026-01-05 02:51:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:50.937796 | orchestrator | 2026-01-05 02:51:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:53.984639 | orchestrator | 2026-01-05 02:51:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:53.986425 | orchestrator | 2026-01-05 02:51:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:53.986465 | orchestrator | 2026-01-05 02:51:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:57.039036 | orchestrator | 2026-01-05 02:51:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:51:57.039737 | orchestrator | 2026-01-05 02:51:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:51:57.039877 | orchestrator | 2026-01-05 02:51:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:52:00.094008 | orchestrator | 2026-01-05 02:52:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:52:00.094962 | orchestrator | 2026-01-05 02:52:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:52:00.094990 | orchestrator | 2026-01-05 02:52:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:52:03.143160 | orchestrator | 2026-01-05 02:52:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:52:03.145979 | orchestrator | 2026-01-05 02:52:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:52:03.146100 | orchestrator | 2026-01-05 02:52:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:52:06.194984 | orchestrator | 2026-01-05 02:52:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:52:06.198497 | orchestrator | 2026-01-05 02:52:06 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:52:06.198620 | orchestrator | 2026-01-05 02:52:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:52:09.242285 | orchestrator | 2026-01-05 02:52:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:52:09.243100 | orchestrator | 2026-01-05 02:52:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:52:09.243137 | orchestrator | 2026-01-05 02:52:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:52:12.279809 | orchestrator | 2026-01-05 02:52:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:52:12.281207 | orchestrator | 2026-01-05 02:52:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:52:12.281359 | orchestrator | 2026-01-05 02:52:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:52:15.328133 | orchestrator | 2026-01-05 02:52:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:52:15.329984 | orchestrator | 2026-01-05 02:52:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:52:15.330104 | orchestrator | 2026-01-05 02:52:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:52:18.379664 | orchestrator | 2026-01-05 02:52:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:52:18.382664 | orchestrator | 2026-01-05 02:52:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:52:18.382727 | orchestrator | 2026-01-05 02:52:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:52:21.435146 | orchestrator | 2026-01-05 02:52:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:52:21.438503 | orchestrator | 2026-01-05 02:52:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:52:21.438552 | orchestrator | 2026-01-05 02:52:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 02:52:24.485101 | orchestrator | 2026-01-05 02:52:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 02:52:24.486130 | orchestrator | 2026-01-05 02:52:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 02:52:24.486191 | orchestrator | 2026-01-05 02:52:24 | INFO  | Wait 1 second(s) until the next check
[previous three messages repeated unchanged every ~3 seconds from 02:52:27 through 02:57:20; both tasks remain in state STARTED throughout]
2026-01-05 02:57:23.373211 | orchestrator | 2026-01-05 02:57:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:23.376087 | orchestrator | 2026-01-05 02:57:23 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:23.376141 | orchestrator | 2026-01-05 02:57:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:26.427321 | orchestrator | 2026-01-05 02:57:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:26.428636 | orchestrator | 2026-01-05 02:57:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:26.428680 | orchestrator | 2026-01-05 02:57:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:29.488845 | orchestrator | 2026-01-05 02:57:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:29.491275 | orchestrator | 2026-01-05 02:57:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:29.491348 | orchestrator | 2026-01-05 02:57:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:32.539639 | orchestrator | 2026-01-05 02:57:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:32.541250 | orchestrator | 2026-01-05 02:57:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:32.541288 | orchestrator | 2026-01-05 02:57:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:35.591774 | orchestrator | 2026-01-05 02:57:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:35.594678 | orchestrator | 2026-01-05 02:57:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:35.594762 | orchestrator | 2026-01-05 02:57:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:38.641661 | orchestrator | 2026-01-05 02:57:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:38.645578 | orchestrator | 2026-01-05 02:57:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:57:38.645647 | orchestrator | 2026-01-05 02:57:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:41.693539 | orchestrator | 2026-01-05 02:57:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:41.697900 | orchestrator | 2026-01-05 02:57:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:41.698067 | orchestrator | 2026-01-05 02:57:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:44.753832 | orchestrator | 2026-01-05 02:57:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:44.755898 | orchestrator | 2026-01-05 02:57:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:44.755981 | orchestrator | 2026-01-05 02:57:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:47.814194 | orchestrator | 2026-01-05 02:57:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:47.816948 | orchestrator | 2026-01-05 02:57:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:47.816990 | orchestrator | 2026-01-05 02:57:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:50.869099 | orchestrator | 2026-01-05 02:57:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:50.871230 | orchestrator | 2026-01-05 02:57:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:50.871337 | orchestrator | 2026-01-05 02:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:53.922005 | orchestrator | 2026-01-05 02:57:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:53.924169 | orchestrator | 2026-01-05 02:57:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:53.924233 | orchestrator | 2026-01-05 02:57:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:57:56.972369 | orchestrator | 2026-01-05 02:57:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:57:56.974343 | orchestrator | 2026-01-05 02:57:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:57:56.974410 | orchestrator | 2026-01-05 02:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:00.028353 | orchestrator | 2026-01-05 02:58:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:00.030207 | orchestrator | 2026-01-05 02:58:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:00.030266 | orchestrator | 2026-01-05 02:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:03.079348 | orchestrator | 2026-01-05 02:58:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:03.080255 | orchestrator | 2026-01-05 02:58:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:03.080340 | orchestrator | 2026-01-05 02:58:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:06.133738 | orchestrator | 2026-01-05 02:58:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:06.135441 | orchestrator | 2026-01-05 02:58:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:06.135503 | orchestrator | 2026-01-05 02:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:09.184954 | orchestrator | 2026-01-05 02:58:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:09.186157 | orchestrator | 2026-01-05 02:58:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:09.186200 | orchestrator | 2026-01-05 02:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:12.238122 | orchestrator | 2026-01-05 
02:58:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:12.240228 | orchestrator | 2026-01-05 02:58:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:12.240269 | orchestrator | 2026-01-05 02:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:15.286589 | orchestrator | 2026-01-05 02:58:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:15.288446 | orchestrator | 2026-01-05 02:58:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:15.288511 | orchestrator | 2026-01-05 02:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:18.332144 | orchestrator | 2026-01-05 02:58:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:18.335844 | orchestrator | 2026-01-05 02:58:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:18.335949 | orchestrator | 2026-01-05 02:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:21.384598 | orchestrator | 2026-01-05 02:58:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:21.387063 | orchestrator | 2026-01-05 02:58:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:21.387139 | orchestrator | 2026-01-05 02:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:24.435715 | orchestrator | 2026-01-05 02:58:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:24.437022 | orchestrator | 2026-01-05 02:58:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:24.437074 | orchestrator | 2026-01-05 02:58:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:27.488463 | orchestrator | 2026-01-05 02:58:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:58:27.491839 | orchestrator | 2026-01-05 02:58:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:27.492167 | orchestrator | 2026-01-05 02:58:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:30.534167 | orchestrator | 2026-01-05 02:58:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:30.535445 | orchestrator | 2026-01-05 02:58:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:30.535525 | orchestrator | 2026-01-05 02:58:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:33.577738 | orchestrator | 2026-01-05 02:58:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:33.579800 | orchestrator | 2026-01-05 02:58:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:33.579872 | orchestrator | 2026-01-05 02:58:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:36.629069 | orchestrator | 2026-01-05 02:58:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:36.630480 | orchestrator | 2026-01-05 02:58:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:36.630523 | orchestrator | 2026-01-05 02:58:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:39.684766 | orchestrator | 2026-01-05 02:58:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:39.685390 | orchestrator | 2026-01-05 02:58:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:39.685429 | orchestrator | 2026-01-05 02:58:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:42.733104 | orchestrator | 2026-01-05 02:58:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:42.734197 | orchestrator | 2026-01-05 02:58:42 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:42.734249 | orchestrator | 2026-01-05 02:58:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:45.779266 | orchestrator | 2026-01-05 02:58:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:45.780503 | orchestrator | 2026-01-05 02:58:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:45.780577 | orchestrator | 2026-01-05 02:58:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:48.829653 | orchestrator | 2026-01-05 02:58:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:48.830986 | orchestrator | 2026-01-05 02:58:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:48.831030 | orchestrator | 2026-01-05 02:58:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:51.874414 | orchestrator | 2026-01-05 02:58:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:51.875054 | orchestrator | 2026-01-05 02:58:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:51.875289 | orchestrator | 2026-01-05 02:58:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:54.927243 | orchestrator | 2026-01-05 02:58:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:54.928900 | orchestrator | 2026-01-05 02:58:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:58:54.928969 | orchestrator | 2026-01-05 02:58:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:57.976780 | orchestrator | 2026-01-05 02:58:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:58:57.979665 | orchestrator | 2026-01-05 02:58:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
02:58:57.979953 | orchestrator | 2026-01-05 02:58:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:01.026777 | orchestrator | 2026-01-05 02:59:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:01.030216 | orchestrator | 2026-01-05 02:59:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:01.030281 | orchestrator | 2026-01-05 02:59:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:04.075443 | orchestrator | 2026-01-05 02:59:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:04.077103 | orchestrator | 2026-01-05 02:59:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:04.077160 | orchestrator | 2026-01-05 02:59:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:07.124275 | orchestrator | 2026-01-05 02:59:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:07.125647 | orchestrator | 2026-01-05 02:59:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:07.125693 | orchestrator | 2026-01-05 02:59:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:10.162436 | orchestrator | 2026-01-05 02:59:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:10.164987 | orchestrator | 2026-01-05 02:59:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:10.165042 | orchestrator | 2026-01-05 02:59:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:13.215030 | orchestrator | 2026-01-05 02:59:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:13.215277 | orchestrator | 2026-01-05 02:59:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:13.215297 | orchestrator | 2026-01-05 02:59:13 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:59:16.271908 | orchestrator | 2026-01-05 02:59:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:16.273235 | orchestrator | 2026-01-05 02:59:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:16.273293 | orchestrator | 2026-01-05 02:59:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:19.321324 | orchestrator | 2026-01-05 02:59:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:19.325172 | orchestrator | 2026-01-05 02:59:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:19.325291 | orchestrator | 2026-01-05 02:59:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:22.372908 | orchestrator | 2026-01-05 02:59:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:22.374766 | orchestrator | 2026-01-05 02:59:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:22.375113 | orchestrator | 2026-01-05 02:59:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:25.424352 | orchestrator | 2026-01-05 02:59:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:25.425404 | orchestrator | 2026-01-05 02:59:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:25.425501 | orchestrator | 2026-01-05 02:59:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:28.475522 | orchestrator | 2026-01-05 02:59:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:28.476640 | orchestrator | 2026-01-05 02:59:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:28.476673 | orchestrator | 2026-01-05 02:59:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:31.522120 | orchestrator | 2026-01-05 
02:59:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:31.523114 | orchestrator | 2026-01-05 02:59:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:31.523151 | orchestrator | 2026-01-05 02:59:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:34.570868 | orchestrator | 2026-01-05 02:59:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:34.573185 | orchestrator | 2026-01-05 02:59:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:34.573262 | orchestrator | 2026-01-05 02:59:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:37.623613 | orchestrator | 2026-01-05 02:59:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:37.625135 | orchestrator | 2026-01-05 02:59:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:37.625168 | orchestrator | 2026-01-05 02:59:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:40.668646 | orchestrator | 2026-01-05 02:59:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:40.669121 | orchestrator | 2026-01-05 02:59:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:40.669236 | orchestrator | 2026-01-05 02:59:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:43.722327 | orchestrator | 2026-01-05 02:59:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:43.724557 | orchestrator | 2026-01-05 02:59:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:43.724689 | orchestrator | 2026-01-05 02:59:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:46.775888 | orchestrator | 2026-01-05 02:59:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 02:59:46.778962 | orchestrator | 2026-01-05 02:59:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:46.779070 | orchestrator | 2026-01-05 02:59:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:49.824280 | orchestrator | 2026-01-05 02:59:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:49.826335 | orchestrator | 2026-01-05 02:59:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:49.826600 | orchestrator | 2026-01-05 02:59:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:52.873388 | orchestrator | 2026-01-05 02:59:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:52.874473 | orchestrator | 2026-01-05 02:59:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:52.874518 | orchestrator | 2026-01-05 02:59:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:55.926568 | orchestrator | 2026-01-05 02:59:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:55.926709 | orchestrator | 2026-01-05 02:59:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:55.926720 | orchestrator | 2026-01-05 02:59:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:58.975608 | orchestrator | 2026-01-05 02:59:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 02:59:58.977557 | orchestrator | 2026-01-05 02:59:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 02:59:58.977636 | orchestrator | 2026-01-05 02:59:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:02.031494 | orchestrator | 2026-01-05 03:00:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:02.033437 | orchestrator | 2026-01-05 03:00:02 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:02.033808 | orchestrator | 2026-01-05 03:00:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:05.082002 | orchestrator | 2026-01-05 03:00:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:05.082487 | orchestrator | 2026-01-05 03:00:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:05.082594 | orchestrator | 2026-01-05 03:00:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:08.129624 | orchestrator | 2026-01-05 03:00:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:08.131760 | orchestrator | 2026-01-05 03:00:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:08.131861 | orchestrator | 2026-01-05 03:00:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:11.177118 | orchestrator | 2026-01-05 03:00:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:11.178138 | orchestrator | 2026-01-05 03:00:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:11.178180 | orchestrator | 2026-01-05 03:00:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:14.226068 | orchestrator | 2026-01-05 03:00:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:14.228104 | orchestrator | 2026-01-05 03:00:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:14.228158 | orchestrator | 2026-01-05 03:00:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:17.278398 | orchestrator | 2026-01-05 03:00:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:17.280358 | orchestrator | 2026-01-05 03:00:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:00:17.327196 | orchestrator | 2026-01-05 03:00:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:20.322716 | orchestrator | 2026-01-05 03:00:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:20.322899 | orchestrator | 2026-01-05 03:00:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:20.322912 | orchestrator | 2026-01-05 03:00:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:23.369256 | orchestrator | 2026-01-05 03:00:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:23.369441 | orchestrator | 2026-01-05 03:00:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:23.369458 | orchestrator | 2026-01-05 03:00:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:26.410238 | orchestrator | 2026-01-05 03:00:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:26.410353 | orchestrator | 2026-01-05 03:00:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:26.410362 | orchestrator | 2026-01-05 03:00:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:29.451674 | orchestrator | 2026-01-05 03:00:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:29.452624 | orchestrator | 2026-01-05 03:00:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:29.452718 | orchestrator | 2026-01-05 03:00:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:32.506640 | orchestrator | 2026-01-05 03:00:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:32.507613 | orchestrator | 2026-01-05 03:00:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:32.507771 | orchestrator | 2026-01-05 03:00:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:00:35.557969 | orchestrator | 2026-01-05 03:00:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:35.559801 | orchestrator | 2026-01-05 03:00:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:35.559869 | orchestrator | 2026-01-05 03:00:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:38.608403 | orchestrator | 2026-01-05 03:00:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:38.609607 | orchestrator | 2026-01-05 03:00:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:38.609770 | orchestrator | 2026-01-05 03:00:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:41.657950 | orchestrator | 2026-01-05 03:00:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:41.658642 | orchestrator | 2026-01-05 03:00:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:41.658692 | orchestrator | 2026-01-05 03:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:44.713943 | orchestrator | 2026-01-05 03:00:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:44.716027 | orchestrator | 2026-01-05 03:00:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:44.716108 | orchestrator | 2026-01-05 03:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:47.760447 | orchestrator | 2026-01-05 03:00:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:47.761405 | orchestrator | 2026-01-05 03:00:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:47.761468 | orchestrator | 2026-01-05 03:00:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:50.813949 | orchestrator | 2026-01-05 
03:00:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:50.814178 | orchestrator | 2026-01-05 03:00:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:50.814192 | orchestrator | 2026-01-05 03:00:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:53.857979 | orchestrator | 2026-01-05 03:00:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:53.859428 | orchestrator | 2026-01-05 03:00:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:53.859477 | orchestrator | 2026-01-05 03:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:56.904203 | orchestrator | 2026-01-05 03:00:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:56.906398 | orchestrator | 2026-01-05 03:00:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:56.906476 | orchestrator | 2026-01-05 03:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:59.962969 | orchestrator | 2026-01-05 03:00:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:00:59.963644 | orchestrator | 2026-01-05 03:00:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:00:59.963734 | orchestrator | 2026-01-05 03:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:03.019034 | orchestrator | 2026-01-05 03:01:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:01:03.020496 | orchestrator | 2026-01-05 03:01:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:01:03.020633 | orchestrator | 2026-01-05 03:01:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:06.074871 | orchestrator | 2026-01-05 03:01:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:01:06.076362 | orchestrator | 2026-01-05 03:01:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 03:01:06.076401 | orchestrator | 2026-01-05 03:01:06 | INFO  | Wait 1 second(s) until the next check
2026-01-05 03:01:09.124916 | orchestrator | 2026-01-05 03:01:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 03:01:09.126854 | orchestrator | 2026-01-05 03:01:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 03:01:09.126907 | orchestrator | 2026-01-05 03:01:09 | INFO  | Wait 1 second(s) until the next check
[... the same three-line polling block repeats every ~3 seconds from 03:01:12 through 03:06:35; tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remain in state STARTED throughout ...]
2026-01-05 03:06:38.576285 | orchestrator | 2026-01-05 03:06:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 03:06:38.577498 | orchestrator | 2026-01-05 03:06:38 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:06:38.577526 | orchestrator | 2026-01-05 03:06:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:06:41.635048 | orchestrator | 2026-01-05 03:06:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:06:41.638559 | orchestrator | 2026-01-05 03:06:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:06:41.638837 | orchestrator | 2026-01-05 03:06:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:06:44.689471 | orchestrator | 2026-01-05 03:06:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:06:44.691822 | orchestrator | 2026-01-05 03:06:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:06:44.691934 | orchestrator | 2026-01-05 03:06:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:06:47.725824 | orchestrator | 2026-01-05 03:06:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:06:47.727803 | orchestrator | 2026-01-05 03:06:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:06:47.727852 | orchestrator | 2026-01-05 03:06:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:06:50.774849 | orchestrator | 2026-01-05 03:06:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:06:50.775946 | orchestrator | 2026-01-05 03:06:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:06:50.775997 | orchestrator | 2026-01-05 03:06:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:06:53.828741 | orchestrator | 2026-01-05 03:06:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:06:53.830433 | orchestrator | 2026-01-05 03:06:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:06:53.830521 | orchestrator | 2026-01-05 03:06:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:06:56.878151 | orchestrator | 2026-01-05 03:06:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:06:56.881606 | orchestrator | 2026-01-05 03:06:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:06:56.881648 | orchestrator | 2026-01-05 03:06:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:06:59.930435 | orchestrator | 2026-01-05 03:06:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:06:59.931830 | orchestrator | 2026-01-05 03:06:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:06:59.931878 | orchestrator | 2026-01-05 03:06:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:02.984336 | orchestrator | 2026-01-05 03:07:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:02.986173 | orchestrator | 2026-01-05 03:07:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:02.986233 | orchestrator | 2026-01-05 03:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:06.037493 | orchestrator | 2026-01-05 03:07:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:06.042281 | orchestrator | 2026-01-05 03:07:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:06.042514 | orchestrator | 2026-01-05 03:07:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:09.086961 | orchestrator | 2026-01-05 03:07:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:09.088607 | orchestrator | 2026-01-05 03:07:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:09.089284 | orchestrator | 2026-01-05 03:07:09 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:07:12.138900 | orchestrator | 2026-01-05 03:07:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:12.140879 | orchestrator | 2026-01-05 03:07:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:12.141169 | orchestrator | 2026-01-05 03:07:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:15.184110 | orchestrator | 2026-01-05 03:07:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:15.186704 | orchestrator | 2026-01-05 03:07:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:15.186760 | orchestrator | 2026-01-05 03:07:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:18.242671 | orchestrator | 2026-01-05 03:07:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:18.243948 | orchestrator | 2026-01-05 03:07:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:18.243975 | orchestrator | 2026-01-05 03:07:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:21.293784 | orchestrator | 2026-01-05 03:07:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:21.295488 | orchestrator | 2026-01-05 03:07:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:21.295563 | orchestrator | 2026-01-05 03:07:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:24.349902 | orchestrator | 2026-01-05 03:07:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:24.351778 | orchestrator | 2026-01-05 03:07:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:24.351834 | orchestrator | 2026-01-05 03:07:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:27.396414 | orchestrator | 2026-01-05 
03:07:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:27.398163 | orchestrator | 2026-01-05 03:07:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:27.398672 | orchestrator | 2026-01-05 03:07:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:30.441234 | orchestrator | 2026-01-05 03:07:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:30.442777 | orchestrator | 2026-01-05 03:07:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:30.442839 | orchestrator | 2026-01-05 03:07:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:33.497244 | orchestrator | 2026-01-05 03:07:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:33.498561 | orchestrator | 2026-01-05 03:07:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:33.498619 | orchestrator | 2026-01-05 03:07:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:36.549429 | orchestrator | 2026-01-05 03:07:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:36.550117 | orchestrator | 2026-01-05 03:07:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:36.550155 | orchestrator | 2026-01-05 03:07:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:39.599824 | orchestrator | 2026-01-05 03:07:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:39.602969 | orchestrator | 2026-01-05 03:07:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:39.603047 | orchestrator | 2026-01-05 03:07:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:42.650419 | orchestrator | 2026-01-05 03:07:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:07:42.651769 | orchestrator | 2026-01-05 03:07:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:42.651832 | orchestrator | 2026-01-05 03:07:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:45.700035 | orchestrator | 2026-01-05 03:07:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:45.701212 | orchestrator | 2026-01-05 03:07:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:45.701466 | orchestrator | 2026-01-05 03:07:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:48.748814 | orchestrator | 2026-01-05 03:07:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:48.749839 | orchestrator | 2026-01-05 03:07:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:48.750200 | orchestrator | 2026-01-05 03:07:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:51.805078 | orchestrator | 2026-01-05 03:07:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:51.806618 | orchestrator | 2026-01-05 03:07:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:51.806661 | orchestrator | 2026-01-05 03:07:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:54.849007 | orchestrator | 2026-01-05 03:07:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:54.851160 | orchestrator | 2026-01-05 03:07:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:54.851385 | orchestrator | 2026-01-05 03:07:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:07:57.896340 | orchestrator | 2026-01-05 03:07:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:07:57.898686 | orchestrator | 2026-01-05 03:07:57 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:07:57.898763 | orchestrator | 2026-01-05 03:07:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:00.951754 | orchestrator | 2026-01-05 03:08:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:00.953805 | orchestrator | 2026-01-05 03:08:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:00.953837 | orchestrator | 2026-01-05 03:08:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:04.008880 | orchestrator | 2026-01-05 03:08:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:04.011352 | orchestrator | 2026-01-05 03:08:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:04.011411 | orchestrator | 2026-01-05 03:08:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:07.056220 | orchestrator | 2026-01-05 03:08:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:07.058193 | orchestrator | 2026-01-05 03:08:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:07.058332 | orchestrator | 2026-01-05 03:08:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:10.103694 | orchestrator | 2026-01-05 03:08:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:10.105179 | orchestrator | 2026-01-05 03:08:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:10.105234 | orchestrator | 2026-01-05 03:08:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:13.156683 | orchestrator | 2026-01-05 03:08:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:13.158544 | orchestrator | 2026-01-05 03:08:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:08:13.158587 | orchestrator | 2026-01-05 03:08:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:16.199025 | orchestrator | 2026-01-05 03:08:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:16.200708 | orchestrator | 2026-01-05 03:08:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:16.200745 | orchestrator | 2026-01-05 03:08:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:19.250126 | orchestrator | 2026-01-05 03:08:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:19.250870 | orchestrator | 2026-01-05 03:08:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:19.250941 | orchestrator | 2026-01-05 03:08:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:22.307298 | orchestrator | 2026-01-05 03:08:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:22.307851 | orchestrator | 2026-01-05 03:08:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:22.307916 | orchestrator | 2026-01-05 03:08:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:25.359602 | orchestrator | 2026-01-05 03:08:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:25.361209 | orchestrator | 2026-01-05 03:08:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:25.361263 | orchestrator | 2026-01-05 03:08:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:28.410379 | orchestrator | 2026-01-05 03:08:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:28.414670 | orchestrator | 2026-01-05 03:08:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:28.414743 | orchestrator | 2026-01-05 03:08:28 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:08:31.452719 | orchestrator | 2026-01-05 03:08:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:31.453501 | orchestrator | 2026-01-05 03:08:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:31.453538 | orchestrator | 2026-01-05 03:08:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:34.497898 | orchestrator | 2026-01-05 03:08:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:34.499002 | orchestrator | 2026-01-05 03:08:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:34.499069 | orchestrator | 2026-01-05 03:08:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:37.554058 | orchestrator | 2026-01-05 03:08:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:37.555845 | orchestrator | 2026-01-05 03:08:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:37.555913 | orchestrator | 2026-01-05 03:08:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:40.601692 | orchestrator | 2026-01-05 03:08:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:40.603720 | orchestrator | 2026-01-05 03:08:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:40.603795 | orchestrator | 2026-01-05 03:08:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:43.652975 | orchestrator | 2026-01-05 03:08:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:43.653624 | orchestrator | 2026-01-05 03:08:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:43.653646 | orchestrator | 2026-01-05 03:08:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:46.697594 | orchestrator | 2026-01-05 
03:08:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:46.700592 | orchestrator | 2026-01-05 03:08:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:46.700646 | orchestrator | 2026-01-05 03:08:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:49.750351 | orchestrator | 2026-01-05 03:08:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:49.752438 | orchestrator | 2026-01-05 03:08:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:49.752484 | orchestrator | 2026-01-05 03:08:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:52.802988 | orchestrator | 2026-01-05 03:08:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:52.805456 | orchestrator | 2026-01-05 03:08:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:52.805521 | orchestrator | 2026-01-05 03:08:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:55.852303 | orchestrator | 2026-01-05 03:08:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:55.853800 | orchestrator | 2026-01-05 03:08:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:55.853854 | orchestrator | 2026-01-05 03:08:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:08:58.905930 | orchestrator | 2026-01-05 03:08:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:08:58.907526 | orchestrator | 2026-01-05 03:08:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:08:58.907579 | orchestrator | 2026-01-05 03:08:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:01.954202 | orchestrator | 2026-01-05 03:09:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:09:01.954854 | orchestrator | 2026-01-05 03:09:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:01.954912 | orchestrator | 2026-01-05 03:09:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:05.004588 | orchestrator | 2026-01-05 03:09:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:05.006505 | orchestrator | 2026-01-05 03:09:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:05.006571 | orchestrator | 2026-01-05 03:09:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:08.055943 | orchestrator | 2026-01-05 03:09:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:08.056022 | orchestrator | 2026-01-05 03:09:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:08.056028 | orchestrator | 2026-01-05 03:09:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:11.104592 | orchestrator | 2026-01-05 03:09:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:11.106665 | orchestrator | 2026-01-05 03:09:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:11.106729 | orchestrator | 2026-01-05 03:09:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:14.151091 | orchestrator | 2026-01-05 03:09:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:14.154862 | orchestrator | 2026-01-05 03:09:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:14.154959 | orchestrator | 2026-01-05 03:09:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:17.200761 | orchestrator | 2026-01-05 03:09:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:17.201691 | orchestrator | 2026-01-05 03:09:17 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:17.201750 | orchestrator | 2026-01-05 03:09:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:20.245601 | orchestrator | 2026-01-05 03:09:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:20.249811 | orchestrator | 2026-01-05 03:09:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:20.249888 | orchestrator | 2026-01-05 03:09:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:23.299714 | orchestrator | 2026-01-05 03:09:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:23.301940 | orchestrator | 2026-01-05 03:09:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:23.302049 | orchestrator | 2026-01-05 03:09:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:26.349741 | orchestrator | 2026-01-05 03:09:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:26.353657 | orchestrator | 2026-01-05 03:09:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:26.353729 | orchestrator | 2026-01-05 03:09:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:29.402800 | orchestrator | 2026-01-05 03:09:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:29.403885 | orchestrator | 2026-01-05 03:09:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:29.403944 | orchestrator | 2026-01-05 03:09:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:32.448752 | orchestrator | 2026-01-05 03:09:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:32.450445 | orchestrator | 2026-01-05 03:09:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:09:32.450481 | orchestrator | 2026-01-05 03:09:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:35.499233 | orchestrator | 2026-01-05 03:09:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:35.499930 | orchestrator | 2026-01-05 03:09:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:35.500000 | orchestrator | 2026-01-05 03:09:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:38.552686 | orchestrator | 2026-01-05 03:09:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:38.554158 | orchestrator | 2026-01-05 03:09:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:38.554202 | orchestrator | 2026-01-05 03:09:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:41.600192 | orchestrator | 2026-01-05 03:09:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:41.601683 | orchestrator | 2026-01-05 03:09:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:41.601723 | orchestrator | 2026-01-05 03:09:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:44.646111 | orchestrator | 2026-01-05 03:09:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:44.648941 | orchestrator | 2026-01-05 03:09:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:44.649014 | orchestrator | 2026-01-05 03:09:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:47.698802 | orchestrator | 2026-01-05 03:09:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:47.700046 | orchestrator | 2026-01-05 03:09:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:47.700128 | orchestrator | 2026-01-05 03:09:47 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:09:50.747779 | orchestrator | 2026-01-05 03:09:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:50.749310 | orchestrator | 2026-01-05 03:09:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:50.749361 | orchestrator | 2026-01-05 03:09:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:53.797437 | orchestrator | 2026-01-05 03:09:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:53.798330 | orchestrator | 2026-01-05 03:09:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:53.798365 | orchestrator | 2026-01-05 03:09:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:56.851747 | orchestrator | 2026-01-05 03:09:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:56.854348 | orchestrator | 2026-01-05 03:09:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:56.854406 | orchestrator | 2026-01-05 03:09:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:59.904362 | orchestrator | 2026-01-05 03:09:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:09:59.906249 | orchestrator | 2026-01-05 03:09:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:09:59.906316 | orchestrator | 2026-01-05 03:09:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:02.956796 | orchestrator | 2026-01-05 03:10:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:10:02.958642 | orchestrator | 2026-01-05 03:10:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:10:02.958741 | orchestrator | 2026-01-05 03:10:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:06.006328 | orchestrator | 2026-01-05 
03:10:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:10:06.007193 | orchestrator | 2026-01-05 03:10:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:10:06.007254 | orchestrator | 2026-01-05 03:10:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:09.078260 | orchestrator | 2026-01-05 03:10:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:10:09.079838 | orchestrator | 2026-01-05 03:10:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:10:09.079939 | orchestrator | 2026-01-05 03:10:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:12.129830 | orchestrator | 2026-01-05 03:10:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:10:12.131468 | orchestrator | 2026-01-05 03:10:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:10:12.131506 | orchestrator | 2026-01-05 03:10:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:15.175362 | orchestrator | 2026-01-05 03:10:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:10:15.177746 | orchestrator | 2026-01-05 03:10:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:10:15.177822 | orchestrator | 2026-01-05 03:10:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:18.226490 | orchestrator | 2026-01-05 03:10:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:10:18.229445 | orchestrator | 2026-01-05 03:10:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:10:18.229558 | orchestrator | 2026-01-05 03:10:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:21.274287 | orchestrator | 2026-01-05 03:10:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:10:21.276335 | orchestrator | 2026-01-05 03:10:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:10:21.276380 | orchestrator | 2026-01-05 03:10:21 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 seconds: tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remain in state STARTED from 03:10:24 through 03:15:35, each check followed by "Wait 1 second(s) until the next check" ...]
2026-01-05 03:15:38.606658 | orchestrator | 2026-01-05 03:15:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:15:38.608888 | orchestrator | 2026-01-05 03:15:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:15:38.608943 | orchestrator | 2026-01-05 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:41.659229 | orchestrator | 2026-01-05 03:15:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:15:41.660842 | orchestrator | 2026-01-05 03:15:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:15:41.660904 | orchestrator | 2026-01-05 03:15:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:44.703079 | orchestrator | 2026-01-05 03:15:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:15:44.703761 | orchestrator | 2026-01-05 03:15:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:15:44.703813 | orchestrator | 2026-01-05 03:15:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:47.750486 | orchestrator | 2026-01-05 03:15:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:15:47.752283 | orchestrator | 2026-01-05 03:15:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:15:47.752353 | orchestrator | 2026-01-05 03:15:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:50.800724 | orchestrator | 2026-01-05 03:15:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:15:50.802063 | orchestrator | 2026-01-05 03:15:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:15:50.802121 | orchestrator | 2026-01-05 03:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:53.859284 | orchestrator | 2026-01-05 03:15:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:15:53.861683 | orchestrator | 2026-01-05 03:15:53 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:15:53.861752 | orchestrator | 2026-01-05 03:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:56.916168 | orchestrator | 2026-01-05 03:15:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:15:56.918770 | orchestrator | 2026-01-05 03:15:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:15:56.918850 | orchestrator | 2026-01-05 03:15:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:59.959711 | orchestrator | 2026-01-05 03:15:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:15:59.962350 | orchestrator | 2026-01-05 03:15:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:15:59.962523 | orchestrator | 2026-01-05 03:15:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:03.010123 | orchestrator | 2026-01-05 03:16:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:03.012954 | orchestrator | 2026-01-05 03:16:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:03.013037 | orchestrator | 2026-01-05 03:16:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:06.061326 | orchestrator | 2026-01-05 03:16:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:06.065335 | orchestrator | 2026-01-05 03:16:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:06.065408 | orchestrator | 2026-01-05 03:16:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:09.128665 | orchestrator | 2026-01-05 03:16:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:09.133968 | orchestrator | 2026-01-05 03:16:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:16:09.134144 | orchestrator | 2026-01-05 03:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:12.180778 | orchestrator | 2026-01-05 03:16:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:12.182209 | orchestrator | 2026-01-05 03:16:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:12.182608 | orchestrator | 2026-01-05 03:16:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:15.230421 | orchestrator | 2026-01-05 03:16:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:15.232873 | orchestrator | 2026-01-05 03:16:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:15.232918 | orchestrator | 2026-01-05 03:16:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:18.280878 | orchestrator | 2026-01-05 03:16:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:18.281282 | orchestrator | 2026-01-05 03:16:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:18.281623 | orchestrator | 2026-01-05 03:16:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:21.321471 | orchestrator | 2026-01-05 03:16:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:21.322057 | orchestrator | 2026-01-05 03:16:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:21.322102 | orchestrator | 2026-01-05 03:16:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:24.375961 | orchestrator | 2026-01-05 03:16:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:24.376969 | orchestrator | 2026-01-05 03:16:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:24.377021 | orchestrator | 2026-01-05 03:16:24 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:16:27.422261 | orchestrator | 2026-01-05 03:16:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:27.425031 | orchestrator | 2026-01-05 03:16:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:27.425109 | orchestrator | 2026-01-05 03:16:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:30.475107 | orchestrator | 2026-01-05 03:16:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:30.476619 | orchestrator | 2026-01-05 03:16:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:30.476657 | orchestrator | 2026-01-05 03:16:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:33.525165 | orchestrator | 2026-01-05 03:16:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:33.525293 | orchestrator | 2026-01-05 03:16:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:33.525333 | orchestrator | 2026-01-05 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:36.567453 | orchestrator | 2026-01-05 03:16:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:36.568777 | orchestrator | 2026-01-05 03:16:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:36.568838 | orchestrator | 2026-01-05 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:39.618078 | orchestrator | 2026-01-05 03:16:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:39.619547 | orchestrator | 2026-01-05 03:16:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:39.619582 | orchestrator | 2026-01-05 03:16:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:42.662467 | orchestrator | 2026-01-05 
03:16:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:42.663025 | orchestrator | 2026-01-05 03:16:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:42.663060 | orchestrator | 2026-01-05 03:16:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:45.707002 | orchestrator | 2026-01-05 03:16:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:45.709094 | orchestrator | 2026-01-05 03:16:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:45.709461 | orchestrator | 2026-01-05 03:16:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:48.757859 | orchestrator | 2026-01-05 03:16:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:48.758994 | orchestrator | 2026-01-05 03:16:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:48.759101 | orchestrator | 2026-01-05 03:16:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:51.806154 | orchestrator | 2026-01-05 03:16:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:51.807768 | orchestrator | 2026-01-05 03:16:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:51.807872 | orchestrator | 2026-01-05 03:16:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:54.854974 | orchestrator | 2026-01-05 03:16:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:16:54.857118 | orchestrator | 2026-01-05 03:16:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:54.857206 | orchestrator | 2026-01-05 03:16:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:57.910330 | orchestrator | 2026-01-05 03:16:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:16:57.913121 | orchestrator | 2026-01-05 03:16:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:16:57.913229 | orchestrator | 2026-01-05 03:16:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:00.965060 | orchestrator | 2026-01-05 03:17:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:00.966874 | orchestrator | 2026-01-05 03:17:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:00.966985 | orchestrator | 2026-01-05 03:17:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:04.027467 | orchestrator | 2026-01-05 03:17:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:04.029707 | orchestrator | 2026-01-05 03:17:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:04.029838 | orchestrator | 2026-01-05 03:17:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:07.078993 | orchestrator | 2026-01-05 03:17:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:07.079624 | orchestrator | 2026-01-05 03:17:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:07.079694 | orchestrator | 2026-01-05 03:17:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:10.124000 | orchestrator | 2026-01-05 03:17:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:10.126123 | orchestrator | 2026-01-05 03:17:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:10.126203 | orchestrator | 2026-01-05 03:17:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:13.174523 | orchestrator | 2026-01-05 03:17:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:13.175665 | orchestrator | 2026-01-05 03:17:13 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:13.175706 | orchestrator | 2026-01-05 03:17:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:16.227635 | orchestrator | 2026-01-05 03:17:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:16.229337 | orchestrator | 2026-01-05 03:17:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:16.229573 | orchestrator | 2026-01-05 03:17:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:19.281426 | orchestrator | 2026-01-05 03:17:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:19.282521 | orchestrator | 2026-01-05 03:17:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:19.282585 | orchestrator | 2026-01-05 03:17:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:22.334383 | orchestrator | 2026-01-05 03:17:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:22.336302 | orchestrator | 2026-01-05 03:17:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:22.336338 | orchestrator | 2026-01-05 03:17:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:25.392406 | orchestrator | 2026-01-05 03:17:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:25.394377 | orchestrator | 2026-01-05 03:17:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:25.394453 | orchestrator | 2026-01-05 03:17:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:28.447146 | orchestrator | 2026-01-05 03:17:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:28.449525 | orchestrator | 2026-01-05 03:17:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:17:28.449600 | orchestrator | 2026-01-05 03:17:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:31.499529 | orchestrator | 2026-01-05 03:17:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:31.502126 | orchestrator | 2026-01-05 03:17:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:31.502251 | orchestrator | 2026-01-05 03:17:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:34.555788 | orchestrator | 2026-01-05 03:17:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:34.555886 | orchestrator | 2026-01-05 03:17:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:34.555897 | orchestrator | 2026-01-05 03:17:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:37.598291 | orchestrator | 2026-01-05 03:17:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:37.598860 | orchestrator | 2026-01-05 03:17:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:37.599343 | orchestrator | 2026-01-05 03:17:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:40.652567 | orchestrator | 2026-01-05 03:17:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:40.654857 | orchestrator | 2026-01-05 03:17:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:40.654934 | orchestrator | 2026-01-05 03:17:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:43.708021 | orchestrator | 2026-01-05 03:17:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:43.709708 | orchestrator | 2026-01-05 03:17:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:43.709766 | orchestrator | 2026-01-05 03:17:43 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:17:46.765220 | orchestrator | 2026-01-05 03:17:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:46.766354 | orchestrator | 2026-01-05 03:17:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:46.766416 | orchestrator | 2026-01-05 03:17:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:49.849297 | orchestrator | 2026-01-05 03:17:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:49.851087 | orchestrator | 2026-01-05 03:17:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:49.851132 | orchestrator | 2026-01-05 03:17:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:52.897855 | orchestrator | 2026-01-05 03:17:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:52.899023 | orchestrator | 2026-01-05 03:17:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:52.899595 | orchestrator | 2026-01-05 03:17:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:55.949569 | orchestrator | 2026-01-05 03:17:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:55.953746 | orchestrator | 2026-01-05 03:17:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:55.953808 | orchestrator | 2026-01-05 03:17:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:59.019630 | orchestrator | 2026-01-05 03:17:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:17:59.021935 | orchestrator | 2026-01-05 03:17:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:17:59.022062 | orchestrator | 2026-01-05 03:17:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:02.068192 | orchestrator | 2026-01-05 
03:18:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:02.070887 | orchestrator | 2026-01-05 03:18:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:02.070966 | orchestrator | 2026-01-05 03:18:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:05.115735 | orchestrator | 2026-01-05 03:18:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:05.116949 | orchestrator | 2026-01-05 03:18:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:05.116979 | orchestrator | 2026-01-05 03:18:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:08.164460 | orchestrator | 2026-01-05 03:18:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:08.164816 | orchestrator | 2026-01-05 03:18:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:08.164840 | orchestrator | 2026-01-05 03:18:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:11.219419 | orchestrator | 2026-01-05 03:18:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:11.221746 | orchestrator | 2026-01-05 03:18:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:11.221822 | orchestrator | 2026-01-05 03:18:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:14.268023 | orchestrator | 2026-01-05 03:18:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:14.269243 | orchestrator | 2026-01-05 03:18:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:14.269316 | orchestrator | 2026-01-05 03:18:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:17.311271 | orchestrator | 2026-01-05 03:18:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:18:17.312185 | orchestrator | 2026-01-05 03:18:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:17.312221 | orchestrator | 2026-01-05 03:18:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:20.363563 | orchestrator | 2026-01-05 03:18:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:20.363677 | orchestrator | 2026-01-05 03:18:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:20.363691 | orchestrator | 2026-01-05 03:18:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:23.411830 | orchestrator | 2026-01-05 03:18:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:23.414075 | orchestrator | 2026-01-05 03:18:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:23.414198 | orchestrator | 2026-01-05 03:18:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:26.465830 | orchestrator | 2026-01-05 03:18:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:26.467524 | orchestrator | 2026-01-05 03:18:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:26.467660 | orchestrator | 2026-01-05 03:18:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:29.521499 | orchestrator | 2026-01-05 03:18:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:29.523027 | orchestrator | 2026-01-05 03:18:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:29.523074 | orchestrator | 2026-01-05 03:18:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:32.569265 | orchestrator | 2026-01-05 03:18:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:32.571486 | orchestrator | 2026-01-05 03:18:32 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:32.571610 | orchestrator | 2026-01-05 03:18:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:35.620710 | orchestrator | 2026-01-05 03:18:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:35.621875 | orchestrator | 2026-01-05 03:18:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:35.621911 | orchestrator | 2026-01-05 03:18:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:38.670520 | orchestrator | 2026-01-05 03:18:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:38.672260 | orchestrator | 2026-01-05 03:18:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:38.672328 | orchestrator | 2026-01-05 03:18:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:41.724511 | orchestrator | 2026-01-05 03:18:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:41.726122 | orchestrator | 2026-01-05 03:18:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:41.726221 | orchestrator | 2026-01-05 03:18:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:44.781109 | orchestrator | 2026-01-05 03:18:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:44.783382 | orchestrator | 2026-01-05 03:18:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:44.783570 | orchestrator | 2026-01-05 03:18:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:47.830724 | orchestrator | 2026-01-05 03:18:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:47.832790 | orchestrator | 2026-01-05 03:18:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:18:47.832878 | orchestrator | 2026-01-05 03:18:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:50.882906 | orchestrator | 2026-01-05 03:18:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:50.883575 | orchestrator | 2026-01-05 03:18:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:50.883708 | orchestrator | 2026-01-05 03:18:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:53.931027 | orchestrator | 2026-01-05 03:18:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:53.934086 | orchestrator | 2026-01-05 03:18:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:53.934176 | orchestrator | 2026-01-05 03:18:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:56.980703 | orchestrator | 2026-01-05 03:18:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:18:56.982283 | orchestrator | 2026-01-05 03:18:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:18:56.982499 | orchestrator | 2026-01-05 03:18:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:19:00.057684 | orchestrator | 2026-01-05 03:19:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:19:00.058882 | orchestrator | 2026-01-05 03:19:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:19:00.058941 | orchestrator | 2026-01-05 03:19:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:19:03.100904 | orchestrator | 2026-01-05 03:19:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:19:03.101737 | orchestrator | 2026-01-05 03:19:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:19:03.101775 | orchestrator | 2026-01-05 03:19:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:19:06.146449 | orchestrator | 2026-01-05 03:19:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:19:06.146726 | orchestrator | 2026-01-05 03:19:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:19:06.146758 | orchestrator | 2026-01-05 03:19:06 | INFO  | Wait 1 second(s) until the next check
[~100 identical polling entries elided: tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remained in state STARTED, checked every ~3 seconds from 03:19:09 through 03:24:17]
2026-01-05 03:24:20.399945 | orchestrator | 2026-01-05 03:24:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:20.401683 | orchestrator | 2026-01-05 03:24:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:20.401744 | orchestrator | 2026-01-05 03:24:20 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 03:24:23.451852 | orchestrator | 2026-01-05 03:24:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:23.453368 | orchestrator | 2026-01-05 03:24:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:23.453465 | orchestrator | 2026-01-05 03:24:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:26.506119 | orchestrator | 2026-01-05 03:24:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:26.508239 | orchestrator | 2026-01-05 03:24:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:26.508361 | orchestrator | 2026-01-05 03:24:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:29.559531 | orchestrator | 2026-01-05 03:24:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:29.561946 | orchestrator | 2026-01-05 03:24:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:29.562125 | orchestrator | 2026-01-05 03:24:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:32.612561 | orchestrator | 2026-01-05 03:24:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:32.615317 | orchestrator | 2026-01-05 03:24:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:32.615386 | orchestrator | 2026-01-05 03:24:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:35.658656 | orchestrator | 2026-01-05 03:24:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:35.659264 | orchestrator | 2026-01-05 03:24:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:35.659577 | orchestrator | 2026-01-05 03:24:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:38.707953 | orchestrator | 2026-01-05 
03:24:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:38.711244 | orchestrator | 2026-01-05 03:24:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:38.711381 | orchestrator | 2026-01-05 03:24:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:41.762352 | orchestrator | 2026-01-05 03:24:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:41.762792 | orchestrator | 2026-01-05 03:24:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:41.762811 | orchestrator | 2026-01-05 03:24:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:44.815608 | orchestrator | 2026-01-05 03:24:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:44.817297 | orchestrator | 2026-01-05 03:24:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:44.817356 | orchestrator | 2026-01-05 03:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:47.861245 | orchestrator | 2026-01-05 03:24:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:47.862461 | orchestrator | 2026-01-05 03:24:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:47.862521 | orchestrator | 2026-01-05 03:24:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:50.913297 | orchestrator | 2026-01-05 03:24:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:50.915247 | orchestrator | 2026-01-05 03:24:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:50.915292 | orchestrator | 2026-01-05 03:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:53.966355 | orchestrator | 2026-01-05 03:24:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:24:53.970180 | orchestrator | 2026-01-05 03:24:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:53.970255 | orchestrator | 2026-01-05 03:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:57.023219 | orchestrator | 2026-01-05 03:24:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:24:57.025016 | orchestrator | 2026-01-05 03:24:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:24:57.025088 | orchestrator | 2026-01-05 03:24:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:00.074079 | orchestrator | 2026-01-05 03:25:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:00.075561 | orchestrator | 2026-01-05 03:25:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:00.075614 | orchestrator | 2026-01-05 03:25:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:03.130472 | orchestrator | 2026-01-05 03:25:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:03.132370 | orchestrator | 2026-01-05 03:25:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:03.132489 | orchestrator | 2026-01-05 03:25:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:06.181242 | orchestrator | 2026-01-05 03:25:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:06.183627 | orchestrator | 2026-01-05 03:25:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:06.183758 | orchestrator | 2026-01-05 03:25:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:09.232382 | orchestrator | 2026-01-05 03:25:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:09.234377 | orchestrator | 2026-01-05 03:25:09 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:09.234444 | orchestrator | 2026-01-05 03:25:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:12.276312 | orchestrator | 2026-01-05 03:25:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:12.278549 | orchestrator | 2026-01-05 03:25:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:12.278628 | orchestrator | 2026-01-05 03:25:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:15.327372 | orchestrator | 2026-01-05 03:25:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:15.330668 | orchestrator | 2026-01-05 03:25:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:15.330791 | orchestrator | 2026-01-05 03:25:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:18.384156 | orchestrator | 2026-01-05 03:25:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:18.386423 | orchestrator | 2026-01-05 03:25:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:18.386501 | orchestrator | 2026-01-05 03:25:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:21.440751 | orchestrator | 2026-01-05 03:25:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:21.443027 | orchestrator | 2026-01-05 03:25:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:21.443197 | orchestrator | 2026-01-05 03:25:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:24.494181 | orchestrator | 2026-01-05 03:25:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:24.495152 | orchestrator | 2026-01-05 03:25:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:25:24.495290 | orchestrator | 2026-01-05 03:25:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:27.545669 | orchestrator | 2026-01-05 03:25:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:27.548968 | orchestrator | 2026-01-05 03:25:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:27.549056 | orchestrator | 2026-01-05 03:25:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:30.595949 | orchestrator | 2026-01-05 03:25:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:30.598489 | orchestrator | 2026-01-05 03:25:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:30.598538 | orchestrator | 2026-01-05 03:25:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:33.655679 | orchestrator | 2026-01-05 03:25:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:33.657555 | orchestrator | 2026-01-05 03:25:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:33.657640 | orchestrator | 2026-01-05 03:25:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:36.709730 | orchestrator | 2026-01-05 03:25:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:36.712993 | orchestrator | 2026-01-05 03:25:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:36.713073 | orchestrator | 2026-01-05 03:25:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:39.760066 | orchestrator | 2026-01-05 03:25:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:39.763147 | orchestrator | 2026-01-05 03:25:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:39.763243 | orchestrator | 2026-01-05 03:25:39 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:25:42.815182 | orchestrator | 2026-01-05 03:25:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:42.816885 | orchestrator | 2026-01-05 03:25:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:42.816959 | orchestrator | 2026-01-05 03:25:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:45.867494 | orchestrator | 2026-01-05 03:25:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:45.869551 | orchestrator | 2026-01-05 03:25:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:45.869590 | orchestrator | 2026-01-05 03:25:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:48.912745 | orchestrator | 2026-01-05 03:25:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:48.914646 | orchestrator | 2026-01-05 03:25:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:48.914701 | orchestrator | 2026-01-05 03:25:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:51.960021 | orchestrator | 2026-01-05 03:25:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:51.961400 | orchestrator | 2026-01-05 03:25:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:51.961628 | orchestrator | 2026-01-05 03:25:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:55.006822 | orchestrator | 2026-01-05 03:25:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:55.007955 | orchestrator | 2026-01-05 03:25:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:55.008084 | orchestrator | 2026-01-05 03:25:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:58.057636 | orchestrator | 2026-01-05 
03:25:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:25:58.059973 | orchestrator | 2026-01-05 03:25:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:25:58.060061 | orchestrator | 2026-01-05 03:25:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:01.103817 | orchestrator | 2026-01-05 03:26:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:01.105577 | orchestrator | 2026-01-05 03:26:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:01.105635 | orchestrator | 2026-01-05 03:26:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:04.150529 | orchestrator | 2026-01-05 03:26:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:04.152163 | orchestrator | 2026-01-05 03:26:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:04.152232 | orchestrator | 2026-01-05 03:26:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:07.202447 | orchestrator | 2026-01-05 03:26:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:07.206746 | orchestrator | 2026-01-05 03:26:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:07.206827 | orchestrator | 2026-01-05 03:26:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:10.259670 | orchestrator | 2026-01-05 03:26:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:10.260342 | orchestrator | 2026-01-05 03:26:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:10.260392 | orchestrator | 2026-01-05 03:26:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:13.311729 | orchestrator | 2026-01-05 03:26:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:26:13.313490 | orchestrator | 2026-01-05 03:26:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:13.313574 | orchestrator | 2026-01-05 03:26:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:16.367889 | orchestrator | 2026-01-05 03:26:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:16.371742 | orchestrator | 2026-01-05 03:26:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:16.371807 | orchestrator | 2026-01-05 03:26:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:19.423818 | orchestrator | 2026-01-05 03:26:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:19.427541 | orchestrator | 2026-01-05 03:26:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:19.427619 | orchestrator | 2026-01-05 03:26:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:22.478460 | orchestrator | 2026-01-05 03:26:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:22.479597 | orchestrator | 2026-01-05 03:26:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:22.479639 | orchestrator | 2026-01-05 03:26:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:25.531011 | orchestrator | 2026-01-05 03:26:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:25.531530 | orchestrator | 2026-01-05 03:26:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:25.531576 | orchestrator | 2026-01-05 03:26:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:28.580408 | orchestrator | 2026-01-05 03:26:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:28.581945 | orchestrator | 2026-01-05 03:26:28 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:28.582207 | orchestrator | 2026-01-05 03:26:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:31.636816 | orchestrator | 2026-01-05 03:26:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:31.638939 | orchestrator | 2026-01-05 03:26:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:31.639001 | orchestrator | 2026-01-05 03:26:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:34.689167 | orchestrator | 2026-01-05 03:26:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:34.690659 | orchestrator | 2026-01-05 03:26:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:34.690714 | orchestrator | 2026-01-05 03:26:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:37.740888 | orchestrator | 2026-01-05 03:26:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:37.742834 | orchestrator | 2026-01-05 03:26:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:37.742884 | orchestrator | 2026-01-05 03:26:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:40.795774 | orchestrator | 2026-01-05 03:26:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:40.800759 | orchestrator | 2026-01-05 03:26:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:40.800906 | orchestrator | 2026-01-05 03:26:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:43.847753 | orchestrator | 2026-01-05 03:26:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:43.849022 | orchestrator | 2026-01-05 03:26:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:26:43.849116 | orchestrator | 2026-01-05 03:26:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:46.903160 | orchestrator | 2026-01-05 03:26:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:46.904619 | orchestrator | 2026-01-05 03:26:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:46.904860 | orchestrator | 2026-01-05 03:26:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:49.957335 | orchestrator | 2026-01-05 03:26:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:49.959458 | orchestrator | 2026-01-05 03:26:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:49.959496 | orchestrator | 2026-01-05 03:26:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:53.010800 | orchestrator | 2026-01-05 03:26:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:53.012348 | orchestrator | 2026-01-05 03:26:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:53.012402 | orchestrator | 2026-01-05 03:26:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:56.061208 | orchestrator | 2026-01-05 03:26:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:56.063259 | orchestrator | 2026-01-05 03:26:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:56.063348 | orchestrator | 2026-01-05 03:26:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:59.102917 | orchestrator | 2026-01-05 03:26:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:26:59.106724 | orchestrator | 2026-01-05 03:26:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:26:59.106788 | orchestrator | 2026-01-05 03:26:59 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:27:02.150882 | orchestrator | 2026-01-05 03:27:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:02.152546 | orchestrator | 2026-01-05 03:27:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:02.152597 | orchestrator | 2026-01-05 03:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:05.203171 | orchestrator | 2026-01-05 03:27:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:05.204984 | orchestrator | 2026-01-05 03:27:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:05.205485 | orchestrator | 2026-01-05 03:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:08.251961 | orchestrator | 2026-01-05 03:27:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:08.253314 | orchestrator | 2026-01-05 03:27:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:08.253366 | orchestrator | 2026-01-05 03:27:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:11.302229 | orchestrator | 2026-01-05 03:27:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:11.304083 | orchestrator | 2026-01-05 03:27:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:11.304148 | orchestrator | 2026-01-05 03:27:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:14.351486 | orchestrator | 2026-01-05 03:27:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:14.353785 | orchestrator | 2026-01-05 03:27:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:14.353827 | orchestrator | 2026-01-05 03:27:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:17.404159 | orchestrator | 2026-01-05 
03:27:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:17.406219 | orchestrator | 2026-01-05 03:27:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:17.406303 | orchestrator | 2026-01-05 03:27:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:20.458577 | orchestrator | 2026-01-05 03:27:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:20.460235 | orchestrator | 2026-01-05 03:27:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:20.460323 | orchestrator | 2026-01-05 03:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:23.506388 | orchestrator | 2026-01-05 03:27:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:23.508332 | orchestrator | 2026-01-05 03:27:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:23.508411 | orchestrator | 2026-01-05 03:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:26.556612 | orchestrator | 2026-01-05 03:27:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:26.558231 | orchestrator | 2026-01-05 03:27:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:26.558273 | orchestrator | 2026-01-05 03:27:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:29.604618 | orchestrator | 2026-01-05 03:27:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:29.606161 | orchestrator | 2026-01-05 03:27:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:29.606189 | orchestrator | 2026-01-05 03:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:32.656876 | orchestrator | 2026-01-05 03:27:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:27:32.658535 | orchestrator | 2026-01-05 03:27:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:32.658707 | orchestrator | 2026-01-05 03:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:35.713896 | orchestrator | 2026-01-05 03:27:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:35.716328 | orchestrator | 2026-01-05 03:27:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:35.716405 | orchestrator | 2026-01-05 03:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:38.760948 | orchestrator | 2026-01-05 03:27:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:38.763561 | orchestrator | 2026-01-05 03:27:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:38.763620 | orchestrator | 2026-01-05 03:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:41.806359 | orchestrator | 2026-01-05 03:27:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:41.807692 | orchestrator | 2026-01-05 03:27:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:41.807750 | orchestrator | 2026-01-05 03:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:44.853997 | orchestrator | 2026-01-05 03:27:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:44.854473 | orchestrator | 2026-01-05 03:27:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:44.854810 | orchestrator | 2026-01-05 03:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:47.897726 | orchestrator | 2026-01-05 03:27:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:47.899199 | orchestrator | 2026-01-05 03:27:47 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:47.899256 | orchestrator | 2026-01-05 03:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:50.947233 | orchestrator | 2026-01-05 03:27:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:50.949537 | orchestrator | 2026-01-05 03:27:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:50.949638 | orchestrator | 2026-01-05 03:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:53.996758 | orchestrator | 2026-01-05 03:27:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:53.998575 | orchestrator | 2026-01-05 03:27:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:53.998600 | orchestrator | 2026-01-05 03:27:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:57.051698 | orchestrator | 2026-01-05 03:27:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:27:57.054783 | orchestrator | 2026-01-05 03:27:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:27:57.054839 | orchestrator | 2026-01-05 03:27:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:28:00.107540 | orchestrator | 2026-01-05 03:28:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:28:00.109963 | orchestrator | 2026-01-05 03:28:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:28:00.110147 | orchestrator | 2026-01-05 03:28:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:28:03.151115 | orchestrator | 2026-01-05 03:28:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:28:03.152565 | orchestrator | 2026-01-05 03:28:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:28:03.152727 | orchestrator | 2026-01-05 03:28:03 | INFO  | Wait 1 second(s) until the next check
2026-01-05 03:28:06.203966 | orchestrator | 2026-01-05 03:28:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 03:28:06.206198 | orchestrator | 2026-01-05 03:28:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 03:28:06.206235 | orchestrator | 2026-01-05 03:28:06 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:28:09 through 03:33:32; both tasks remained in state STARTED throughout ...]
2026-01-05 03:33:35.694823 | orchestrator | 2026-01-05 03:33:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 03:33:35.697198 | orchestrator | 2026-01-05 03:33:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 03:33:35.697242 | orchestrator | 2026-01-05 03:33:35 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 03:33:38.747265 | orchestrator | 2026-01-05 03:33:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:33:38.749069 | orchestrator | 2026-01-05 03:33:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:33:38.749105 | orchestrator | 2026-01-05 03:33:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:41.789051 | orchestrator | 2026-01-05 03:33:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:33:41.789690 | orchestrator | 2026-01-05 03:33:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:33:41.789784 | orchestrator | 2026-01-05 03:33:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:44.844577 | orchestrator | 2026-01-05 03:33:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:33:44.846356 | orchestrator | 2026-01-05 03:33:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:33:44.846429 | orchestrator | 2026-01-05 03:33:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:47.896171 | orchestrator | 2026-01-05 03:33:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:33:47.898605 | orchestrator | 2026-01-05 03:33:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:33:47.898699 | orchestrator | 2026-01-05 03:33:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:50.945660 | orchestrator | 2026-01-05 03:33:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:33:50.946862 | orchestrator | 2026-01-05 03:33:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:33:50.946973 | orchestrator | 2026-01-05 03:33:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:53.993883 | orchestrator | 2026-01-05 
03:33:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:33:53.996041 | orchestrator | 2026-01-05 03:33:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:33:53.996088 | orchestrator | 2026-01-05 03:33:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:57.044629 | orchestrator | 2026-01-05 03:33:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:33:57.046744 | orchestrator | 2026-01-05 03:33:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:33:57.046789 | orchestrator | 2026-01-05 03:33:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:00.113189 | orchestrator | 2026-01-05 03:34:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:00.115006 | orchestrator | 2026-01-05 03:34:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:00.115052 | orchestrator | 2026-01-05 03:34:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:03.165727 | orchestrator | 2026-01-05 03:34:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:03.166424 | orchestrator | 2026-01-05 03:34:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:03.166518 | orchestrator | 2026-01-05 03:34:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:06.217454 | orchestrator | 2026-01-05 03:34:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:06.218967 | orchestrator | 2026-01-05 03:34:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:06.219010 | orchestrator | 2026-01-05 03:34:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:09.269559 | orchestrator | 2026-01-05 03:34:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:34:09.270802 | orchestrator | 2026-01-05 03:34:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:09.270840 | orchestrator | 2026-01-05 03:34:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:12.322536 | orchestrator | 2026-01-05 03:34:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:12.325634 | orchestrator | 2026-01-05 03:34:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:12.325705 | orchestrator | 2026-01-05 03:34:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:15.376065 | orchestrator | 2026-01-05 03:34:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:15.377420 | orchestrator | 2026-01-05 03:34:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:15.377573 | orchestrator | 2026-01-05 03:34:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:18.433649 | orchestrator | 2026-01-05 03:34:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:18.435705 | orchestrator | 2026-01-05 03:34:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:18.435782 | orchestrator | 2026-01-05 03:34:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:21.488875 | orchestrator | 2026-01-05 03:34:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:21.491450 | orchestrator | 2026-01-05 03:34:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:21.491542 | orchestrator | 2026-01-05 03:34:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:24.543085 | orchestrator | 2026-01-05 03:34:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:24.544281 | orchestrator | 2026-01-05 03:34:24 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:24.544366 | orchestrator | 2026-01-05 03:34:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:27.589547 | orchestrator | 2026-01-05 03:34:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:27.590901 | orchestrator | 2026-01-05 03:34:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:27.590938 | orchestrator | 2026-01-05 03:34:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:30.648769 | orchestrator | 2026-01-05 03:34:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:30.650707 | orchestrator | 2026-01-05 03:34:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:30.650768 | orchestrator | 2026-01-05 03:34:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:33.705639 | orchestrator | 2026-01-05 03:34:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:33.705757 | orchestrator | 2026-01-05 03:34:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:33.705781 | orchestrator | 2026-01-05 03:34:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:36.757628 | orchestrator | 2026-01-05 03:34:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:36.760262 | orchestrator | 2026-01-05 03:34:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:36.760517 | orchestrator | 2026-01-05 03:34:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:39.814389 | orchestrator | 2026-01-05 03:34:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:39.818221 | orchestrator | 2026-01-05 03:34:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:34:39.818285 | orchestrator | 2026-01-05 03:34:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:42.873597 | orchestrator | 2026-01-05 03:34:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:42.875404 | orchestrator | 2026-01-05 03:34:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:42.875454 | orchestrator | 2026-01-05 03:34:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:45.919324 | orchestrator | 2026-01-05 03:34:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:45.921615 | orchestrator | 2026-01-05 03:34:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:45.921696 | orchestrator | 2026-01-05 03:34:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:48.971466 | orchestrator | 2026-01-05 03:34:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:48.973728 | orchestrator | 2026-01-05 03:34:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:48.973879 | orchestrator | 2026-01-05 03:34:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:52.030881 | orchestrator | 2026-01-05 03:34:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:52.035125 | orchestrator | 2026-01-05 03:34:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:52.035197 | orchestrator | 2026-01-05 03:34:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:55.084902 | orchestrator | 2026-01-05 03:34:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:55.085980 | orchestrator | 2026-01-05 03:34:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:55.086120 | orchestrator | 2026-01-05 03:34:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:34:58.131566 | orchestrator | 2026-01-05 03:34:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:34:58.133104 | orchestrator | 2026-01-05 03:34:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:34:58.133152 | orchestrator | 2026-01-05 03:34:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:01.190269 | orchestrator | 2026-01-05 03:35:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:01.191320 | orchestrator | 2026-01-05 03:35:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:01.191357 | orchestrator | 2026-01-05 03:35:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:04.235712 | orchestrator | 2026-01-05 03:35:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:04.237828 | orchestrator | 2026-01-05 03:35:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:04.237873 | orchestrator | 2026-01-05 03:35:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:07.280672 | orchestrator | 2026-01-05 03:35:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:07.282004 | orchestrator | 2026-01-05 03:35:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:07.282092 | orchestrator | 2026-01-05 03:35:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:10.328335 | orchestrator | 2026-01-05 03:35:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:10.329114 | orchestrator | 2026-01-05 03:35:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:10.329241 | orchestrator | 2026-01-05 03:35:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:13.384726 | orchestrator | 2026-01-05 
03:35:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:13.385235 | orchestrator | 2026-01-05 03:35:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:13.385680 | orchestrator | 2026-01-05 03:35:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:16.440256 | orchestrator | 2026-01-05 03:35:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:16.441415 | orchestrator | 2026-01-05 03:35:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:16.441502 | orchestrator | 2026-01-05 03:35:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:19.498119 | orchestrator | 2026-01-05 03:35:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:19.501054 | orchestrator | 2026-01-05 03:35:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:19.501108 | orchestrator | 2026-01-05 03:35:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:22.553393 | orchestrator | 2026-01-05 03:35:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:22.554578 | orchestrator | 2026-01-05 03:35:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:22.554636 | orchestrator | 2026-01-05 03:35:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:25.605880 | orchestrator | 2026-01-05 03:35:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:25.609285 | orchestrator | 2026-01-05 03:35:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:25.609369 | orchestrator | 2026-01-05 03:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:28.654559 | orchestrator | 2026-01-05 03:35:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:35:28.656214 | orchestrator | 2026-01-05 03:35:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:28.656263 | orchestrator | 2026-01-05 03:35:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:31.710382 | orchestrator | 2026-01-05 03:35:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:31.711565 | orchestrator | 2026-01-05 03:35:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:31.711625 | orchestrator | 2026-01-05 03:35:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:34.765921 | orchestrator | 2026-01-05 03:35:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:34.768272 | orchestrator | 2026-01-05 03:35:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:34.768319 | orchestrator | 2026-01-05 03:35:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:37.824430 | orchestrator | 2026-01-05 03:35:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:37.826915 | orchestrator | 2026-01-05 03:35:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:37.827193 | orchestrator | 2026-01-05 03:35:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:40.878352 | orchestrator | 2026-01-05 03:35:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:40.880395 | orchestrator | 2026-01-05 03:35:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:40.880499 | orchestrator | 2026-01-05 03:35:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:43.936145 | orchestrator | 2026-01-05 03:35:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:43.938694 | orchestrator | 2026-01-05 03:35:43 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:43.938729 | orchestrator | 2026-01-05 03:35:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:46.993628 | orchestrator | 2026-01-05 03:35:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:46.995160 | orchestrator | 2026-01-05 03:35:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:46.995213 | orchestrator | 2026-01-05 03:35:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:50.084628 | orchestrator | 2026-01-05 03:35:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:50.085189 | orchestrator | 2026-01-05 03:35:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:50.085229 | orchestrator | 2026-01-05 03:35:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:53.131764 | orchestrator | 2026-01-05 03:35:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:53.132890 | orchestrator | 2026-01-05 03:35:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:53.132982 | orchestrator | 2026-01-05 03:35:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:56.192435 | orchestrator | 2026-01-05 03:35:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:56.193970 | orchestrator | 2026-01-05 03:35:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:35:56.194059 | orchestrator | 2026-01-05 03:35:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:59.250875 | orchestrator | 2026-01-05 03:35:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:35:59.253115 | orchestrator | 2026-01-05 03:35:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:35:59.253244 | orchestrator | 2026-01-05 03:35:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:02.312849 | orchestrator | 2026-01-05 03:36:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:02.315060 | orchestrator | 2026-01-05 03:36:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:02.315113 | orchestrator | 2026-01-05 03:36:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:05.364675 | orchestrator | 2026-01-05 03:36:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:05.366268 | orchestrator | 2026-01-05 03:36:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:05.366357 | orchestrator | 2026-01-05 03:36:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:08.417717 | orchestrator | 2026-01-05 03:36:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:08.420943 | orchestrator | 2026-01-05 03:36:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:08.421039 | orchestrator | 2026-01-05 03:36:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:11.480337 | orchestrator | 2026-01-05 03:36:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:11.482574 | orchestrator | 2026-01-05 03:36:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:11.482619 | orchestrator | 2026-01-05 03:36:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:14.533402 | orchestrator | 2026-01-05 03:36:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:14.535759 | orchestrator | 2026-01-05 03:36:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:14.535826 | orchestrator | 2026-01-05 03:36:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:36:17.589975 | orchestrator | 2026-01-05 03:36:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:17.592933 | orchestrator | 2026-01-05 03:36:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:17.593003 | orchestrator | 2026-01-05 03:36:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:20.649159 | orchestrator | 2026-01-05 03:36:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:20.651296 | orchestrator | 2026-01-05 03:36:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:20.651341 | orchestrator | 2026-01-05 03:36:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:23.706064 | orchestrator | 2026-01-05 03:36:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:23.707769 | orchestrator | 2026-01-05 03:36:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:23.707806 | orchestrator | 2026-01-05 03:36:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:26.762509 | orchestrator | 2026-01-05 03:36:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:26.764960 | orchestrator | 2026-01-05 03:36:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:26.765015 | orchestrator | 2026-01-05 03:36:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:29.823024 | orchestrator | 2026-01-05 03:36:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:29.824291 | orchestrator | 2026-01-05 03:36:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:29.824341 | orchestrator | 2026-01-05 03:36:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:32.878399 | orchestrator | 2026-01-05 
03:36:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:32.879921 | orchestrator | 2026-01-05 03:36:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:32.879982 | orchestrator | 2026-01-05 03:36:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:35.930293 | orchestrator | 2026-01-05 03:36:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:35.932407 | orchestrator | 2026-01-05 03:36:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:35.932467 | orchestrator | 2026-01-05 03:36:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:38.980612 | orchestrator | 2026-01-05 03:36:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:38.983742 | orchestrator | 2026-01-05 03:36:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:38.983807 | orchestrator | 2026-01-05 03:36:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:42.037418 | orchestrator | 2026-01-05 03:36:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:42.039542 | orchestrator | 2026-01-05 03:36:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:42.039601 | orchestrator | 2026-01-05 03:36:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:45.087342 | orchestrator | 2026-01-05 03:36:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:45.088908 | orchestrator | 2026-01-05 03:36:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:45.089025 | orchestrator | 2026-01-05 03:36:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:48.139984 | orchestrator | 2026-01-05 03:36:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:36:48.140602 | orchestrator | 2026-01-05 03:36:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:48.140629 | orchestrator | 2026-01-05 03:36:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:51.191407 | orchestrator | 2026-01-05 03:36:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:51.193253 | orchestrator | 2026-01-05 03:36:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:51.193608 | orchestrator | 2026-01-05 03:36:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:54.236833 | orchestrator | 2026-01-05 03:36:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:54.238255 | orchestrator | 2026-01-05 03:36:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:54.238327 | orchestrator | 2026-01-05 03:36:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:57.286754 | orchestrator | 2026-01-05 03:36:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:36:57.288055 | orchestrator | 2026-01-05 03:36:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:36:57.288198 | orchestrator | 2026-01-05 03:36:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:37:00.335085 | orchestrator | 2026-01-05 03:37:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:37:00.335215 | orchestrator | 2026-01-05 03:37:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:37:00.335241 | orchestrator | 2026-01-05 03:37:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:37:03.384933 | orchestrator | 2026-01-05 03:37:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:37:03.386611 | orchestrator | 2026-01-05 03:37:03 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:37:03.386850 | orchestrator | 2026-01-05 03:37:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:37:06.435992 | orchestrator | 2026-01-05 03:37:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:37:06.436305 | orchestrator | 2026-01-05 03:37:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:37:06.437297 | orchestrator | 2026-01-05 03:37:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:37:09.484816 | orchestrator | 2026-01-05 03:37:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:37:09.485887 | orchestrator | 2026-01-05 03:37:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:37:09.485963 | orchestrator | 2026-01-05 03:37:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:37:12.537510 | orchestrator | 2026-01-05 03:37:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:37:12.539501 | orchestrator | 2026-01-05 03:37:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:37:12.539600 | orchestrator | 2026-01-05 03:37:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:37:15.590120 | orchestrator | 2026-01-05 03:37:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:37:15.592072 | orchestrator | 2026-01-05 03:37:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:37:15.592310 | orchestrator | 2026-01-05 03:37:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:37:18.637488 | orchestrator | 2026-01-05 03:37:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:37:18.640324 | orchestrator | 2026-01-05 03:37:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:37:18.640497 | orchestrator | 2026-01-05 03:37:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:37:21.692497 | orchestrator | 2026-01-05 03:37:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:37:21.693271 | orchestrator | 2026-01-05 03:37:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:37:21.693301 | orchestrator | 2026-01-05 03:37:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 03:42:17.738973 | orchestrator | 2026-01-05 03:42:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:17.742566 | orchestrator | 2026-01-05 03:42:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:17.742663 | orchestrator | 2026-01-05 03:42:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:20.793869 | orchestrator | 2026-01-05 03:42:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:20.796728 | orchestrator | 2026-01-05 03:42:20 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:20.796787 | orchestrator | 2026-01-05 03:42:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:23.841508 | orchestrator | 2026-01-05 03:42:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:23.842701 | orchestrator | 2026-01-05 03:42:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:23.842748 | orchestrator | 2026-01-05 03:42:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:26.887868 | orchestrator | 2026-01-05 03:42:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:26.889809 | orchestrator | 2026-01-05 03:42:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:26.890346 | orchestrator | 2026-01-05 03:42:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:29.931938 | orchestrator | 2026-01-05 03:42:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:29.935027 | orchestrator | 2026-01-05 03:42:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:29.935171 | orchestrator | 2026-01-05 03:42:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:32.982309 | orchestrator | 2026-01-05 03:42:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:32.983794 | orchestrator | 2026-01-05 03:42:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:32.983828 | orchestrator | 2026-01-05 03:42:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:36.041664 | orchestrator | 2026-01-05 03:42:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:36.043274 | orchestrator | 2026-01-05 03:42:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:42:36.043342 | orchestrator | 2026-01-05 03:42:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:39.092523 | orchestrator | 2026-01-05 03:42:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:39.094962 | orchestrator | 2026-01-05 03:42:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:39.095016 | orchestrator | 2026-01-05 03:42:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:42.147450 | orchestrator | 2026-01-05 03:42:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:42.149153 | orchestrator | 2026-01-05 03:42:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:42.149228 | orchestrator | 2026-01-05 03:42:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:45.200451 | orchestrator | 2026-01-05 03:42:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:45.206100 | orchestrator | 2026-01-05 03:42:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:45.206176 | orchestrator | 2026-01-05 03:42:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:48.260402 | orchestrator | 2026-01-05 03:42:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:48.263614 | orchestrator | 2026-01-05 03:42:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:48.263698 | orchestrator | 2026-01-05 03:42:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:51.308200 | orchestrator | 2026-01-05 03:42:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:51.309089 | orchestrator | 2026-01-05 03:42:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:51.309180 | orchestrator | 2026-01-05 03:42:51 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:42:54.362302 | orchestrator | 2026-01-05 03:42:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:54.364511 | orchestrator | 2026-01-05 03:42:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:54.364620 | orchestrator | 2026-01-05 03:42:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:57.411242 | orchestrator | 2026-01-05 03:42:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:42:57.413327 | orchestrator | 2026-01-05 03:42:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:42:57.413400 | orchestrator | 2026-01-05 03:42:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:00.467871 | orchestrator | 2026-01-05 03:43:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:00.469887 | orchestrator | 2026-01-05 03:43:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:00.470007 | orchestrator | 2026-01-05 03:43:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:03.520739 | orchestrator | 2026-01-05 03:43:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:03.522734 | orchestrator | 2026-01-05 03:43:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:03.522797 | orchestrator | 2026-01-05 03:43:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:06.575152 | orchestrator | 2026-01-05 03:43:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:06.578594 | orchestrator | 2026-01-05 03:43:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:06.578693 | orchestrator | 2026-01-05 03:43:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:09.631186 | orchestrator | 2026-01-05 
03:43:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:09.633437 | orchestrator | 2026-01-05 03:43:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:09.633536 | orchestrator | 2026-01-05 03:43:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:12.686868 | orchestrator | 2026-01-05 03:43:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:12.688524 | orchestrator | 2026-01-05 03:43:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:12.688577 | orchestrator | 2026-01-05 03:43:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:15.739290 | orchestrator | 2026-01-05 03:43:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:15.742867 | orchestrator | 2026-01-05 03:43:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:15.742936 | orchestrator | 2026-01-05 03:43:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:18.791766 | orchestrator | 2026-01-05 03:43:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:18.793633 | orchestrator | 2026-01-05 03:43:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:18.793713 | orchestrator | 2026-01-05 03:43:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:21.843531 | orchestrator | 2026-01-05 03:43:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:21.843700 | orchestrator | 2026-01-05 03:43:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:21.843751 | orchestrator | 2026-01-05 03:43:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:24.899382 | orchestrator | 2026-01-05 03:43:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:43:24.900916 | orchestrator | 2026-01-05 03:43:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:24.900980 | orchestrator | 2026-01-05 03:43:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:27.953994 | orchestrator | 2026-01-05 03:43:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:27.956035 | orchestrator | 2026-01-05 03:43:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:27.956091 | orchestrator | 2026-01-05 03:43:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:30.998226 | orchestrator | 2026-01-05 03:43:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:30.999917 | orchestrator | 2026-01-05 03:43:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:31.000067 | orchestrator | 2026-01-05 03:43:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:34.050717 | orchestrator | 2026-01-05 03:43:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:34.051925 | orchestrator | 2026-01-05 03:43:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:34.051964 | orchestrator | 2026-01-05 03:43:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:37.101591 | orchestrator | 2026-01-05 03:43:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:37.103134 | orchestrator | 2026-01-05 03:43:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:37.103161 | orchestrator | 2026-01-05 03:43:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:40.150149 | orchestrator | 2026-01-05 03:43:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:40.152071 | orchestrator | 2026-01-05 03:43:40 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:40.152152 | orchestrator | 2026-01-05 03:43:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:43.203785 | orchestrator | 2026-01-05 03:43:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:43.203965 | orchestrator | 2026-01-05 03:43:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:43.204137 | orchestrator | 2026-01-05 03:43:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:46.247902 | orchestrator | 2026-01-05 03:43:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:46.248701 | orchestrator | 2026-01-05 03:43:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:46.248767 | orchestrator | 2026-01-05 03:43:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:49.303035 | orchestrator | 2026-01-05 03:43:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:49.304822 | orchestrator | 2026-01-05 03:43:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:49.304870 | orchestrator | 2026-01-05 03:43:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:52.351608 | orchestrator | 2026-01-05 03:43:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:52.354165 | orchestrator | 2026-01-05 03:43:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:52.354261 | orchestrator | 2026-01-05 03:43:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:55.402715 | orchestrator | 2026-01-05 03:43:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:55.405219 | orchestrator | 2026-01-05 03:43:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:43:55.405282 | orchestrator | 2026-01-05 03:43:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:58.449623 | orchestrator | 2026-01-05 03:43:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:43:58.451251 | orchestrator | 2026-01-05 03:43:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:43:58.451347 | orchestrator | 2026-01-05 03:43:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:01.496819 | orchestrator | 2026-01-05 03:44:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:01.498383 | orchestrator | 2026-01-05 03:44:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:01.498446 | orchestrator | 2026-01-05 03:44:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:04.544943 | orchestrator | 2026-01-05 03:44:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:04.546323 | orchestrator | 2026-01-05 03:44:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:04.546483 | orchestrator | 2026-01-05 03:44:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:07.589971 | orchestrator | 2026-01-05 03:44:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:07.590757 | orchestrator | 2026-01-05 03:44:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:07.590803 | orchestrator | 2026-01-05 03:44:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:10.639291 | orchestrator | 2026-01-05 03:44:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:10.641332 | orchestrator | 2026-01-05 03:44:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:10.641381 | orchestrator | 2026-01-05 03:44:10 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:44:13.686295 | orchestrator | 2026-01-05 03:44:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:13.687383 | orchestrator | 2026-01-05 03:44:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:13.687554 | orchestrator | 2026-01-05 03:44:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:16.736232 | orchestrator | 2026-01-05 03:44:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:16.736741 | orchestrator | 2026-01-05 03:44:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:16.736788 | orchestrator | 2026-01-05 03:44:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:19.785245 | orchestrator | 2026-01-05 03:44:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:19.787745 | orchestrator | 2026-01-05 03:44:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:19.787804 | orchestrator | 2026-01-05 03:44:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:22.834350 | orchestrator | 2026-01-05 03:44:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:22.835790 | orchestrator | 2026-01-05 03:44:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:22.835852 | orchestrator | 2026-01-05 03:44:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:25.888155 | orchestrator | 2026-01-05 03:44:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:25.889013 | orchestrator | 2026-01-05 03:44:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:25.889327 | orchestrator | 2026-01-05 03:44:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:28.934334 | orchestrator | 2026-01-05 
03:44:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:28.934809 | orchestrator | 2026-01-05 03:44:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:28.934854 | orchestrator | 2026-01-05 03:44:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:31.984559 | orchestrator | 2026-01-05 03:44:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:31.985934 | orchestrator | 2026-01-05 03:44:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:31.985996 | orchestrator | 2026-01-05 03:44:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:35.045570 | orchestrator | 2026-01-05 03:44:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:35.049014 | orchestrator | 2026-01-05 03:44:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:35.049067 | orchestrator | 2026-01-05 03:44:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:38.103549 | orchestrator | 2026-01-05 03:44:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:38.103872 | orchestrator | 2026-01-05 03:44:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:38.103930 | orchestrator | 2026-01-05 03:44:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:41.162708 | orchestrator | 2026-01-05 03:44:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:41.164616 | orchestrator | 2026-01-05 03:44:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:41.164775 | orchestrator | 2026-01-05 03:44:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:44.212546 | orchestrator | 2026-01-05 03:44:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:44:44.214252 | orchestrator | 2026-01-05 03:44:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:44.214453 | orchestrator | 2026-01-05 03:44:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:47.260227 | orchestrator | 2026-01-05 03:44:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:47.264349 | orchestrator | 2026-01-05 03:44:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:47.264432 | orchestrator | 2026-01-05 03:44:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:50.310520 | orchestrator | 2026-01-05 03:44:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:50.312622 | orchestrator | 2026-01-05 03:44:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:50.312663 | orchestrator | 2026-01-05 03:44:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:53.355112 | orchestrator | 2026-01-05 03:44:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:53.356190 | orchestrator | 2026-01-05 03:44:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:53.356343 | orchestrator | 2026-01-05 03:44:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:56.410259 | orchestrator | 2026-01-05 03:44:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:56.412766 | orchestrator | 2026-01-05 03:44:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:56.412859 | orchestrator | 2026-01-05 03:44:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:59.458482 | orchestrator | 2026-01-05 03:44:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:44:59.461968 | orchestrator | 2026-01-05 03:44:59 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:44:59.462206 | orchestrator | 2026-01-05 03:44:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:02.506830 | orchestrator | 2026-01-05 03:45:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:02.510297 | orchestrator | 2026-01-05 03:45:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:02.510376 | orchestrator | 2026-01-05 03:45:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:05.566835 | orchestrator | 2026-01-05 03:45:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:05.569107 | orchestrator | 2026-01-05 03:45:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:05.569146 | orchestrator | 2026-01-05 03:45:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:08.626950 | orchestrator | 2026-01-05 03:45:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:08.628495 | orchestrator | 2026-01-05 03:45:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:08.628577 | orchestrator | 2026-01-05 03:45:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:11.683271 | orchestrator | 2026-01-05 03:45:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:11.685995 | orchestrator | 2026-01-05 03:45:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:11.686082 | orchestrator | 2026-01-05 03:45:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:14.741985 | orchestrator | 2026-01-05 03:45:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:14.745660 | orchestrator | 2026-01-05 03:45:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:45:14.745849 | orchestrator | 2026-01-05 03:45:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:17.804444 | orchestrator | 2026-01-05 03:45:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:17.806409 | orchestrator | 2026-01-05 03:45:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:17.806466 | orchestrator | 2026-01-05 03:45:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:20.860940 | orchestrator | 2026-01-05 03:45:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:20.863303 | orchestrator | 2026-01-05 03:45:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:20.863354 | orchestrator | 2026-01-05 03:45:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:23.913880 | orchestrator | 2026-01-05 03:45:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:23.915859 | orchestrator | 2026-01-05 03:45:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:23.915934 | orchestrator | 2026-01-05 03:45:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:26.968083 | orchestrator | 2026-01-05 03:45:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:26.969749 | orchestrator | 2026-01-05 03:45:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:26.969837 | orchestrator | 2026-01-05 03:45:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:30.023940 | orchestrator | 2026-01-05 03:45:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:30.025842 | orchestrator | 2026-01-05 03:45:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:30.025900 | orchestrator | 2026-01-05 03:45:30 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:45:33.067944 | orchestrator | 2026-01-05 03:45:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:33.071746 | orchestrator | 2026-01-05 03:45:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:33.071867 | orchestrator | 2026-01-05 03:45:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:36.130608 | orchestrator | 2026-01-05 03:45:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:36.132799 | orchestrator | 2026-01-05 03:45:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:36.132912 | orchestrator | 2026-01-05 03:45:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:39.186432 | orchestrator | 2026-01-05 03:45:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:39.187454 | orchestrator | 2026-01-05 03:45:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:39.187487 | orchestrator | 2026-01-05 03:45:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:42.244473 | orchestrator | 2026-01-05 03:45:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:42.246409 | orchestrator | 2026-01-05 03:45:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:42.246503 | orchestrator | 2026-01-05 03:45:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:45.305353 | orchestrator | 2026-01-05 03:45:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:45.308974 | orchestrator | 2026-01-05 03:45:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:45.309076 | orchestrator | 2026-01-05 03:45:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:48.357018 | orchestrator | 2026-01-05 
03:45:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:48.358179 | orchestrator | 2026-01-05 03:45:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:48.358232 | orchestrator | 2026-01-05 03:45:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:51.404524 | orchestrator | 2026-01-05 03:45:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:51.406376 | orchestrator | 2026-01-05 03:45:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:51.406459 | orchestrator | 2026-01-05 03:45:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:54.459035 | orchestrator | 2026-01-05 03:45:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:54.462643 | orchestrator | 2026-01-05 03:45:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:54.462740 | orchestrator | 2026-01-05 03:45:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:57.515017 | orchestrator | 2026-01-05 03:45:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:45:57.517010 | orchestrator | 2026-01-05 03:45:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:45:57.517066 | orchestrator | 2026-01-05 03:45:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:46:00.570412 | orchestrator | 2026-01-05 03:46:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:46:00.572617 | orchestrator | 2026-01-05 03:46:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:46:00.572729 | orchestrator | 2026-01-05 03:46:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:46:03.627463 | orchestrator | 2026-01-05 03:46:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:46:03.628233 | orchestrator | 2026-01-05 03:46:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:46:03.628283 | orchestrator | 2026-01-05 03:46:03 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 s from 03:46:06 to 03:51:33: tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remained in state STARTED ...]
2026-01-05 03:51:36.274321 | orchestrator | 2026-01-05 03:51:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:51:36.276411 | orchestrator | 2026-01-05 03:51:36 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:51:36.276496 | orchestrator | 2026-01-05 03:51:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:39.327655 | orchestrator | 2026-01-05 03:51:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:51:39.330214 | orchestrator | 2026-01-05 03:51:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:51:39.330285 | orchestrator | 2026-01-05 03:51:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:42.381107 | orchestrator | 2026-01-05 03:51:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:51:42.383682 | orchestrator | 2026-01-05 03:51:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:51:42.383736 | orchestrator | 2026-01-05 03:51:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:45.434845 | orchestrator | 2026-01-05 03:51:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:51:45.436848 | orchestrator | 2026-01-05 03:51:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:51:45.436892 | orchestrator | 2026-01-05 03:51:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:48.485736 | orchestrator | 2026-01-05 03:51:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:51:48.487447 | orchestrator | 2026-01-05 03:51:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:51:48.487485 | orchestrator | 2026-01-05 03:51:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:51.536595 | orchestrator | 2026-01-05 03:51:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:51:51.538300 | orchestrator | 2026-01-05 03:51:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:51:51.538415 | orchestrator | 2026-01-05 03:51:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:54.595150 | orchestrator | 2026-01-05 03:51:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:51:54.599145 | orchestrator | 2026-01-05 03:51:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:51:54.599237 | orchestrator | 2026-01-05 03:51:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:57.656847 | orchestrator | 2026-01-05 03:51:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:51:57.659674 | orchestrator | 2026-01-05 03:51:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:51:57.659875 | orchestrator | 2026-01-05 03:51:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:00.709901 | orchestrator | 2026-01-05 03:52:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:00.710218 | orchestrator | 2026-01-05 03:52:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:00.710249 | orchestrator | 2026-01-05 03:52:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:03.763987 | orchestrator | 2026-01-05 03:52:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:03.765907 | orchestrator | 2026-01-05 03:52:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:03.765968 | orchestrator | 2026-01-05 03:52:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:06.816667 | orchestrator | 2026-01-05 03:52:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:06.818255 | orchestrator | 2026-01-05 03:52:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:06.818687 | orchestrator | 2026-01-05 03:52:06 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:52:09.872253 | orchestrator | 2026-01-05 03:52:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:09.874183 | orchestrator | 2026-01-05 03:52:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:09.874696 | orchestrator | 2026-01-05 03:52:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:12.923742 | orchestrator | 2026-01-05 03:52:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:12.926989 | orchestrator | 2026-01-05 03:52:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:12.927046 | orchestrator | 2026-01-05 03:52:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:15.973452 | orchestrator | 2026-01-05 03:52:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:15.975314 | orchestrator | 2026-01-05 03:52:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:15.975485 | orchestrator | 2026-01-05 03:52:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:19.042278 | orchestrator | 2026-01-05 03:52:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:19.044260 | orchestrator | 2026-01-05 03:52:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:19.044315 | orchestrator | 2026-01-05 03:52:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:22.096711 | orchestrator | 2026-01-05 03:52:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:22.099603 | orchestrator | 2026-01-05 03:52:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:22.099643 | orchestrator | 2026-01-05 03:52:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:25.144488 | orchestrator | 2026-01-05 
03:52:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:25.149234 | orchestrator | 2026-01-05 03:52:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:25.149315 | orchestrator | 2026-01-05 03:52:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:28.206671 | orchestrator | 2026-01-05 03:52:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:28.207622 | orchestrator | 2026-01-05 03:52:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:28.207826 | orchestrator | 2026-01-05 03:52:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:31.264091 | orchestrator | 2026-01-05 03:52:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:31.266986 | orchestrator | 2026-01-05 03:52:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:31.267084 | orchestrator | 2026-01-05 03:52:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:34.315520 | orchestrator | 2026-01-05 03:52:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:34.318229 | orchestrator | 2026-01-05 03:52:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:34.318296 | orchestrator | 2026-01-05 03:52:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:37.357059 | orchestrator | 2026-01-05 03:52:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:37.359694 | orchestrator | 2026-01-05 03:52:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:37.359759 | orchestrator | 2026-01-05 03:52:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:40.410800 | orchestrator | 2026-01-05 03:52:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:52:40.412651 | orchestrator | 2026-01-05 03:52:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:40.412711 | orchestrator | 2026-01-05 03:52:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:43.461014 | orchestrator | 2026-01-05 03:52:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:43.462131 | orchestrator | 2026-01-05 03:52:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:43.462166 | orchestrator | 2026-01-05 03:52:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:46.505718 | orchestrator | 2026-01-05 03:52:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:46.506877 | orchestrator | 2026-01-05 03:52:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:46.506916 | orchestrator | 2026-01-05 03:52:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:49.549196 | orchestrator | 2026-01-05 03:52:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:49.550837 | orchestrator | 2026-01-05 03:52:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:49.550896 | orchestrator | 2026-01-05 03:52:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:52.603131 | orchestrator | 2026-01-05 03:52:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:52.605346 | orchestrator | 2026-01-05 03:52:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:52.605433 | orchestrator | 2026-01-05 03:52:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:55.646869 | orchestrator | 2026-01-05 03:52:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:55.648279 | orchestrator | 2026-01-05 03:52:55 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:55.648379 | orchestrator | 2026-01-05 03:52:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:58.697478 | orchestrator | 2026-01-05 03:52:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:52:58.700501 | orchestrator | 2026-01-05 03:52:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:52:58.700559 | orchestrator | 2026-01-05 03:52:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:01.756297 | orchestrator | 2026-01-05 03:53:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:01.758102 | orchestrator | 2026-01-05 03:53:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:01.758263 | orchestrator | 2026-01-05 03:53:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:04.809170 | orchestrator | 2026-01-05 03:53:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:04.812104 | orchestrator | 2026-01-05 03:53:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:04.812239 | orchestrator | 2026-01-05 03:53:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:07.873516 | orchestrator | 2026-01-05 03:53:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:07.874961 | orchestrator | 2026-01-05 03:53:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:07.875053 | orchestrator | 2026-01-05 03:53:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:10.929402 | orchestrator | 2026-01-05 03:53:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:10.930650 | orchestrator | 2026-01-05 03:53:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:53:10.930748 | orchestrator | 2026-01-05 03:53:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:13.991187 | orchestrator | 2026-01-05 03:53:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:13.991875 | orchestrator | 2026-01-05 03:53:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:13.991940 | orchestrator | 2026-01-05 03:53:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:17.043143 | orchestrator | 2026-01-05 03:53:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:17.044914 | orchestrator | 2026-01-05 03:53:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:17.044977 | orchestrator | 2026-01-05 03:53:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:20.105223 | orchestrator | 2026-01-05 03:53:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:20.106883 | orchestrator | 2026-01-05 03:53:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:20.106930 | orchestrator | 2026-01-05 03:53:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:23.156798 | orchestrator | 2026-01-05 03:53:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:23.158243 | orchestrator | 2026-01-05 03:53:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:23.158412 | orchestrator | 2026-01-05 03:53:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:26.205666 | orchestrator | 2026-01-05 03:53:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:26.206798 | orchestrator | 2026-01-05 03:53:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:26.206848 | orchestrator | 2026-01-05 03:53:26 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:53:29.260999 | orchestrator | 2026-01-05 03:53:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:29.261552 | orchestrator | 2026-01-05 03:53:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:29.261738 | orchestrator | 2026-01-05 03:53:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:32.314295 | orchestrator | 2026-01-05 03:53:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:32.316684 | orchestrator | 2026-01-05 03:53:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:32.316714 | orchestrator | 2026-01-05 03:53:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:35.363909 | orchestrator | 2026-01-05 03:53:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:35.365801 | orchestrator | 2026-01-05 03:53:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:35.365933 | orchestrator | 2026-01-05 03:53:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:38.417321 | orchestrator | 2026-01-05 03:53:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:38.419063 | orchestrator | 2026-01-05 03:53:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:38.419247 | orchestrator | 2026-01-05 03:53:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:41.470359 | orchestrator | 2026-01-05 03:53:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:41.471983 | orchestrator | 2026-01-05 03:53:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:41.472061 | orchestrator | 2026-01-05 03:53:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:44.522761 | orchestrator | 2026-01-05 
03:53:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:44.524742 | orchestrator | 2026-01-05 03:53:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:44.524867 | orchestrator | 2026-01-05 03:53:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:47.572155 | orchestrator | 2026-01-05 03:53:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:47.574652 | orchestrator | 2026-01-05 03:53:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:47.574796 | orchestrator | 2026-01-05 03:53:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:50.620026 | orchestrator | 2026-01-05 03:53:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:50.621929 | orchestrator | 2026-01-05 03:53:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:50.621971 | orchestrator | 2026-01-05 03:53:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:53.673361 | orchestrator | 2026-01-05 03:53:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:53.675732 | orchestrator | 2026-01-05 03:53:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:53.675987 | orchestrator | 2026-01-05 03:53:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:56.721991 | orchestrator | 2026-01-05 03:53:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:53:56.722754 | orchestrator | 2026-01-05 03:53:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:56.722813 | orchestrator | 2026-01-05 03:53:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:59.772836 | orchestrator | 2026-01-05 03:53:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 03:53:59.775122 | orchestrator | 2026-01-05 03:53:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:53:59.775235 | orchestrator | 2026-01-05 03:53:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:02.826145 | orchestrator | 2026-01-05 03:54:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:02.827906 | orchestrator | 2026-01-05 03:54:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:02.827978 | orchestrator | 2026-01-05 03:54:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:05.882385 | orchestrator | 2026-01-05 03:54:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:05.883958 | orchestrator | 2026-01-05 03:54:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:05.884036 | orchestrator | 2026-01-05 03:54:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:08.930059 | orchestrator | 2026-01-05 03:54:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:08.932262 | orchestrator | 2026-01-05 03:54:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:08.932312 | orchestrator | 2026-01-05 03:54:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:11.986348 | orchestrator | 2026-01-05 03:54:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:11.987930 | orchestrator | 2026-01-05 03:54:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:11.987996 | orchestrator | 2026-01-05 03:54:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:15.043678 | orchestrator | 2026-01-05 03:54:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:15.045416 | orchestrator | 2026-01-05 03:54:15 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:15.045488 | orchestrator | 2026-01-05 03:54:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:18.103717 | orchestrator | 2026-01-05 03:54:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:18.106754 | orchestrator | 2026-01-05 03:54:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:18.106851 | orchestrator | 2026-01-05 03:54:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:21.152918 | orchestrator | 2026-01-05 03:54:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:21.154537 | orchestrator | 2026-01-05 03:54:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:21.154678 | orchestrator | 2026-01-05 03:54:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:24.196642 | orchestrator | 2026-01-05 03:54:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:24.199114 | orchestrator | 2026-01-05 03:54:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:24.199226 | orchestrator | 2026-01-05 03:54:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:27.245505 | orchestrator | 2026-01-05 03:54:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:27.247149 | orchestrator | 2026-01-05 03:54:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:27.247228 | orchestrator | 2026-01-05 03:54:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:30.289854 | orchestrator | 2026-01-05 03:54:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:30.292987 | orchestrator | 2026-01-05 03:54:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
03:54:30.293054 | orchestrator | 2026-01-05 03:54:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:33.344568 | orchestrator | 2026-01-05 03:54:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:33.345818 | orchestrator | 2026-01-05 03:54:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:33.345844 | orchestrator | 2026-01-05 03:54:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:36.391204 | orchestrator | 2026-01-05 03:54:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:36.392962 | orchestrator | 2026-01-05 03:54:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:36.393041 | orchestrator | 2026-01-05 03:54:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:39.435763 | orchestrator | 2026-01-05 03:54:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:39.437457 | orchestrator | 2026-01-05 03:54:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:39.437518 | orchestrator | 2026-01-05 03:54:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:42.488178 | orchestrator | 2026-01-05 03:54:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:42.490568 | orchestrator | 2026-01-05 03:54:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:42.490649 | orchestrator | 2026-01-05 03:54:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:45.542087 | orchestrator | 2026-01-05 03:54:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:45.545197 | orchestrator | 2026-01-05 03:54:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:45.545236 | orchestrator | 2026-01-05 03:54:45 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:54:48.598272 | orchestrator | 2026-01-05 03:54:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:48.600217 | orchestrator | 2026-01-05 03:54:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:48.600283 | orchestrator | 2026-01-05 03:54:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:51.645076 | orchestrator | 2026-01-05 03:54:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:51.647095 | orchestrator | 2026-01-05 03:54:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:51.647138 | orchestrator | 2026-01-05 03:54:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:54.692877 | orchestrator | 2026-01-05 03:54:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:54.693887 | orchestrator | 2026-01-05 03:54:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:54.693980 | orchestrator | 2026-01-05 03:54:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:57.737470 | orchestrator | 2026-01-05 03:54:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:54:57.738823 | orchestrator | 2026-01-05 03:54:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:54:57.739085 | orchestrator | 2026-01-05 03:54:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:55:00.787848 | orchestrator | 2026-01-05 03:55:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:55:00.789778 | orchestrator | 2026-01-05 03:55:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:55:00.863803 | orchestrator | 2026-01-05 03:55:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:55:03.843854 | orchestrator | 2026-01-05 
03:55:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:55:03.845951 | orchestrator | 2026-01-05 03:55:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:55:03.846090 | orchestrator | 2026-01-05 03:55:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:55:06.888729 | orchestrator | 2026-01-05 03:55:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:55:06.890718 | orchestrator | 2026-01-05 03:55:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:55:06.890788 | orchestrator | 2026-01-05 03:55:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:55:09.932173 | orchestrator | 2026-01-05 03:55:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:55:09.934379 | orchestrator | 2026-01-05 03:55:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:55:09.934436 | orchestrator | 2026-01-05 03:55:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:55:12.984048 | orchestrator | 2026-01-05 03:55:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:55:12.986935 | orchestrator | 2026-01-05 03:55:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:55:12.986992 | orchestrator | 2026-01-05 03:55:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:55:16.040174 | orchestrator | 2026-01-05 03:55:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 03:55:16.040824 | orchestrator | 2026-01-05 03:55:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 03:55:16.040854 | orchestrator | 2026-01-05 03:55:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:55:19.095135 | orchestrator | 2026-01-05 03:55:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED
2026-01-05 03:55:19.097884 | orchestrator | 2026-01-05 03:55:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 03:55:19.097965 | orchestrator | 2026-01-05 03:55:19 | INFO  | Wait 1 second(s) until the next check
[... repeated identical status checks elided: tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remained in state STARTED, polled roughly every 3 seconds from 03:55:22 through 04:00:33 ...]
2026-01-05 04:00:36.467445 | orchestrator | 2026-01-05 04:00:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state
STARTED 2026-01-05 04:00:36.469044 | orchestrator | 2026-01-05 04:00:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:00:36.469124 | orchestrator | 2026-01-05 04:00:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:39.515710 | orchestrator | 2026-01-05 04:00:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:00:39.517662 | orchestrator | 2026-01-05 04:00:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:00:39.517686 | orchestrator | 2026-01-05 04:00:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:42.559769 | orchestrator | 2026-01-05 04:00:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:00:42.561074 | orchestrator | 2026-01-05 04:00:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:00:42.561123 | orchestrator | 2026-01-05 04:00:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:45.613539 | orchestrator | 2026-01-05 04:00:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:00:45.616260 | orchestrator | 2026-01-05 04:00:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:00:45.616302 | orchestrator | 2026-01-05 04:00:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:48.665759 | orchestrator | 2026-01-05 04:00:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:00:48.667660 | orchestrator | 2026-01-05 04:00:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:00:48.667710 | orchestrator | 2026-01-05 04:00:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:51.718492 | orchestrator | 2026-01-05 04:00:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:00:51.720344 | orchestrator | 2026-01-05 04:00:51 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:00:51.720404 | orchestrator | 2026-01-05 04:00:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:54.767072 | orchestrator | 2026-01-05 04:00:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:00:54.770408 | orchestrator | 2026-01-05 04:00:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:00:54.770504 | orchestrator | 2026-01-05 04:00:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:57.826483 | orchestrator | 2026-01-05 04:00:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:00:57.829207 | orchestrator | 2026-01-05 04:00:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:00:57.829264 | orchestrator | 2026-01-05 04:00:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:00.874375 | orchestrator | 2026-01-05 04:01:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:00.874982 | orchestrator | 2026-01-05 04:01:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:00.875097 | orchestrator | 2026-01-05 04:01:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:03.920962 | orchestrator | 2026-01-05 04:01:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:03.923133 | orchestrator | 2026-01-05 04:01:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:03.923192 | orchestrator | 2026-01-05 04:01:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:06.976264 | orchestrator | 2026-01-05 04:01:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:06.978150 | orchestrator | 2026-01-05 04:01:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:01:06.978206 | orchestrator | 2026-01-05 04:01:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:10.029606 | orchestrator | 2026-01-05 04:01:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:10.031052 | orchestrator | 2026-01-05 04:01:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:10.031310 | orchestrator | 2026-01-05 04:01:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:13.084236 | orchestrator | 2026-01-05 04:01:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:13.085594 | orchestrator | 2026-01-05 04:01:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:13.085646 | orchestrator | 2026-01-05 04:01:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:16.137392 | orchestrator | 2026-01-05 04:01:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:16.140858 | orchestrator | 2026-01-05 04:01:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:16.140948 | orchestrator | 2026-01-05 04:01:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:19.185787 | orchestrator | 2026-01-05 04:01:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:19.187285 | orchestrator | 2026-01-05 04:01:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:19.187347 | orchestrator | 2026-01-05 04:01:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:22.248299 | orchestrator | 2026-01-05 04:01:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:22.251655 | orchestrator | 2026-01-05 04:01:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:22.251748 | orchestrator | 2026-01-05 04:01:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:01:25.303951 | orchestrator | 2026-01-05 04:01:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:25.306102 | orchestrator | 2026-01-05 04:01:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:25.306137 | orchestrator | 2026-01-05 04:01:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:28.352536 | orchestrator | 2026-01-05 04:01:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:28.354302 | orchestrator | 2026-01-05 04:01:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:28.354365 | orchestrator | 2026-01-05 04:01:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:31.398981 | orchestrator | 2026-01-05 04:01:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:31.400003 | orchestrator | 2026-01-05 04:01:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:31.400231 | orchestrator | 2026-01-05 04:01:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:34.450415 | orchestrator | 2026-01-05 04:01:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:34.451957 | orchestrator | 2026-01-05 04:01:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:34.452163 | orchestrator | 2026-01-05 04:01:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:37.506641 | orchestrator | 2026-01-05 04:01:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:37.510112 | orchestrator | 2026-01-05 04:01:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:37.510289 | orchestrator | 2026-01-05 04:01:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:40.566180 | orchestrator | 2026-01-05 
04:01:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:40.568586 | orchestrator | 2026-01-05 04:01:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:40.568618 | orchestrator | 2026-01-05 04:01:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:43.619845 | orchestrator | 2026-01-05 04:01:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:43.621315 | orchestrator | 2026-01-05 04:01:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:43.621377 | orchestrator | 2026-01-05 04:01:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:46.675408 | orchestrator | 2026-01-05 04:01:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:46.676639 | orchestrator | 2026-01-05 04:01:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:46.676712 | orchestrator | 2026-01-05 04:01:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:49.724701 | orchestrator | 2026-01-05 04:01:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:49.726367 | orchestrator | 2026-01-05 04:01:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:49.726462 | orchestrator | 2026-01-05 04:01:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:52.777576 | orchestrator | 2026-01-05 04:01:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:52.777777 | orchestrator | 2026-01-05 04:01:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:52.777819 | orchestrator | 2026-01-05 04:01:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:55.825193 | orchestrator | 2026-01-05 04:01:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:01:55.825899 | orchestrator | 2026-01-05 04:01:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:55.825954 | orchestrator | 2026-01-05 04:01:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:58.871187 | orchestrator | 2026-01-05 04:01:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:01:58.873626 | orchestrator | 2026-01-05 04:01:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:01:58.874118 | orchestrator | 2026-01-05 04:01:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:01.922806 | orchestrator | 2026-01-05 04:02:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:01.924837 | orchestrator | 2026-01-05 04:02:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:01.925015 | orchestrator | 2026-01-05 04:02:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:04.969551 | orchestrator | 2026-01-05 04:02:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:04.971627 | orchestrator | 2026-01-05 04:02:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:04.971691 | orchestrator | 2026-01-05 04:02:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:08.027389 | orchestrator | 2026-01-05 04:02:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:08.027533 | orchestrator | 2026-01-05 04:02:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:08.027562 | orchestrator | 2026-01-05 04:02:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:11.073528 | orchestrator | 2026-01-05 04:02:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:11.074817 | orchestrator | 2026-01-05 04:02:11 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:11.074858 | orchestrator | 2026-01-05 04:02:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:14.118877 | orchestrator | 2026-01-05 04:02:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:14.121362 | orchestrator | 2026-01-05 04:02:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:14.121458 | orchestrator | 2026-01-05 04:02:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:17.162566 | orchestrator | 2026-01-05 04:02:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:17.165754 | orchestrator | 2026-01-05 04:02:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:17.165915 | orchestrator | 2026-01-05 04:02:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:20.213895 | orchestrator | 2026-01-05 04:02:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:20.215078 | orchestrator | 2026-01-05 04:02:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:20.215183 | orchestrator | 2026-01-05 04:02:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:23.267569 | orchestrator | 2026-01-05 04:02:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:23.270513 | orchestrator | 2026-01-05 04:02:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:23.270598 | orchestrator | 2026-01-05 04:02:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:26.314939 | orchestrator | 2026-01-05 04:02:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:26.317567 | orchestrator | 2026-01-05 04:02:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:02:26.317636 | orchestrator | 2026-01-05 04:02:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:29.362253 | orchestrator | 2026-01-05 04:02:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:29.363877 | orchestrator | 2026-01-05 04:02:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:29.363935 | orchestrator | 2026-01-05 04:02:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:32.413402 | orchestrator | 2026-01-05 04:02:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:32.414314 | orchestrator | 2026-01-05 04:02:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:32.414537 | orchestrator | 2026-01-05 04:02:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:35.459689 | orchestrator | 2026-01-05 04:02:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:35.459891 | orchestrator | 2026-01-05 04:02:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:35.459917 | orchestrator | 2026-01-05 04:02:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:38.499382 | orchestrator | 2026-01-05 04:02:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:38.501366 | orchestrator | 2026-01-05 04:02:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:38.501449 | orchestrator | 2026-01-05 04:02:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:41.544667 | orchestrator | 2026-01-05 04:02:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:41.547072 | orchestrator | 2026-01-05 04:02:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:41.547180 | orchestrator | 2026-01-05 04:02:41 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:02:44.597984 | orchestrator | 2026-01-05 04:02:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:44.599451 | orchestrator | 2026-01-05 04:02:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:44.599520 | orchestrator | 2026-01-05 04:02:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:47.651963 | orchestrator | 2026-01-05 04:02:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:47.653544 | orchestrator | 2026-01-05 04:02:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:47.653697 | orchestrator | 2026-01-05 04:02:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:50.707332 | orchestrator | 2026-01-05 04:02:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:50.709605 | orchestrator | 2026-01-05 04:02:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:50.709692 | orchestrator | 2026-01-05 04:02:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:53.765303 | orchestrator | 2026-01-05 04:02:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:53.766559 | orchestrator | 2026-01-05 04:02:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:53.766617 | orchestrator | 2026-01-05 04:02:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:56.814566 | orchestrator | 2026-01-05 04:02:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:56.817077 | orchestrator | 2026-01-05 04:02:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:56.817387 | orchestrator | 2026-01-05 04:02:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:59.871539 | orchestrator | 2026-01-05 
04:02:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:02:59.873730 | orchestrator | 2026-01-05 04:02:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:02:59.873778 | orchestrator | 2026-01-05 04:02:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:02.923321 | orchestrator | 2026-01-05 04:03:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:02.924167 | orchestrator | 2026-01-05 04:03:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:02.924210 | orchestrator | 2026-01-05 04:03:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:05.968716 | orchestrator | 2026-01-05 04:03:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:05.970204 | orchestrator | 2026-01-05 04:03:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:05.970247 | orchestrator | 2026-01-05 04:03:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:09.036662 | orchestrator | 2026-01-05 04:03:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:09.166317 | orchestrator | 2026-01-05 04:03:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:09.166414 | orchestrator | 2026-01-05 04:03:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:12.093605 | orchestrator | 2026-01-05 04:03:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:12.094966 | orchestrator | 2026-01-05 04:03:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:12.094989 | orchestrator | 2026-01-05 04:03:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:15.149311 | orchestrator | 2026-01-05 04:03:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:03:15.152270 | orchestrator | 2026-01-05 04:03:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:15.152413 | orchestrator | 2026-01-05 04:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:18.202278 | orchestrator | 2026-01-05 04:03:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:18.204369 | orchestrator | 2026-01-05 04:03:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:18.204473 | orchestrator | 2026-01-05 04:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:21.254559 | orchestrator | 2026-01-05 04:03:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:21.257899 | orchestrator | 2026-01-05 04:03:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:21.258152 | orchestrator | 2026-01-05 04:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:24.312723 | orchestrator | 2026-01-05 04:03:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:24.314622 | orchestrator | 2026-01-05 04:03:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:24.314676 | orchestrator | 2026-01-05 04:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:27.360259 | orchestrator | 2026-01-05 04:03:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:27.361788 | orchestrator | 2026-01-05 04:03:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:27.361902 | orchestrator | 2026-01-05 04:03:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:30.420113 | orchestrator | 2026-01-05 04:03:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:30.421816 | orchestrator | 2026-01-05 04:03:30 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:30.421871 | orchestrator | 2026-01-05 04:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:33.476930 | orchestrator | 2026-01-05 04:03:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:33.478118 | orchestrator | 2026-01-05 04:03:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:33.478154 | orchestrator | 2026-01-05 04:03:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:36.521689 | orchestrator | 2026-01-05 04:03:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:36.522984 | orchestrator | 2026-01-05 04:03:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:36.523084 | orchestrator | 2026-01-05 04:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:39.575524 | orchestrator | 2026-01-05 04:03:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:39.577758 | orchestrator | 2026-01-05 04:03:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:39.577815 | orchestrator | 2026-01-05 04:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:42.628575 | orchestrator | 2026-01-05 04:03:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:42.631164 | orchestrator | 2026-01-05 04:03:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:42.631284 | orchestrator | 2026-01-05 04:03:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:45.683478 | orchestrator | 2026-01-05 04:03:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:45.685004 | orchestrator | 2026-01-05 04:03:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:03:45.685053 | orchestrator | 2026-01-05 04:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:48.738395 | orchestrator | 2026-01-05 04:03:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:48.740550 | orchestrator | 2026-01-05 04:03:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:48.740629 | orchestrator | 2026-01-05 04:03:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:51.795450 | orchestrator | 2026-01-05 04:03:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:51.796987 | orchestrator | 2026-01-05 04:03:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:51.797465 | orchestrator | 2026-01-05 04:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:54.848005 | orchestrator | 2026-01-05 04:03:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:54.850252 | orchestrator | 2026-01-05 04:03:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:54.850308 | orchestrator | 2026-01-05 04:03:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:57.902791 | orchestrator | 2026-01-05 04:03:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:03:57.904420 | orchestrator | 2026-01-05 04:03:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:03:57.904466 | orchestrator | 2026-01-05 04:03:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:04:00.957484 | orchestrator | 2026-01-05 04:04:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:04:00.958429 | orchestrator | 2026-01-05 04:04:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:04:00.958462 | orchestrator | 2026-01-05 04:04:00 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:04:04.005258 | orchestrator | 2026-01-05 04:04:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:04:04.005733 | orchestrator | 2026-01-05 04:04:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:04:04.005860 | orchestrator | 2026-01-05 04:04:04 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 04:04:07 to 04:09:15; tasks afe8ab2b-12c8-47a5-a936-080dda967fc3 and 861ec4e0-4387-4901-b7ab-9d4f13823dbe remained in state STARTED throughout ...]
2026-01-05 04:09:18.257603 | orchestrator | 2026-01-05 04:09:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:18.259553 | orchestrator | 2026-01-05 04:09:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:18.259606 | orchestrator | 2026-01-05 04:09:18 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:09:21.309213 | orchestrator | 2026-01-05 04:09:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:21.311569 | orchestrator | 2026-01-05 04:09:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:21.311639 | orchestrator | 2026-01-05 04:09:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:24.356015 | orchestrator | 2026-01-05 04:09:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:24.357504 | orchestrator | 2026-01-05 04:09:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:24.357592 | orchestrator | 2026-01-05 04:09:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:27.415131 | orchestrator | 2026-01-05 04:09:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:27.416578 | orchestrator | 2026-01-05 04:09:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:27.416623 | orchestrator | 2026-01-05 04:09:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:30.463851 | orchestrator | 2026-01-05 04:09:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:30.467283 | orchestrator | 2026-01-05 04:09:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:30.467434 | orchestrator | 2026-01-05 04:09:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:33.523311 | orchestrator | 2026-01-05 04:09:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:33.525803 | orchestrator | 2026-01-05 04:09:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:33.525864 | orchestrator | 2026-01-05 04:09:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:36.572249 | orchestrator | 2026-01-05 
04:09:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:36.572588 | orchestrator | 2026-01-05 04:09:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:36.572619 | orchestrator | 2026-01-05 04:09:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:39.616858 | orchestrator | 2026-01-05 04:09:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:39.619396 | orchestrator | 2026-01-05 04:09:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:39.619454 | orchestrator | 2026-01-05 04:09:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:42.666726 | orchestrator | 2026-01-05 04:09:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:42.668392 | orchestrator | 2026-01-05 04:09:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:42.668542 | orchestrator | 2026-01-05 04:09:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:45.710405 | orchestrator | 2026-01-05 04:09:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:45.711884 | orchestrator | 2026-01-05 04:09:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:45.712124 | orchestrator | 2026-01-05 04:09:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:48.759973 | orchestrator | 2026-01-05 04:09:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:48.761392 | orchestrator | 2026-01-05 04:09:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:48.761552 | orchestrator | 2026-01-05 04:09:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:51.805574 | orchestrator | 2026-01-05 04:09:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:09:51.806780 | orchestrator | 2026-01-05 04:09:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:51.806979 | orchestrator | 2026-01-05 04:09:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:54.863282 | orchestrator | 2026-01-05 04:09:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:54.865718 | orchestrator | 2026-01-05 04:09:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:54.865761 | orchestrator | 2026-01-05 04:09:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:57.921730 | orchestrator | 2026-01-05 04:09:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:09:57.922794 | orchestrator | 2026-01-05 04:09:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:09:57.922845 | orchestrator | 2026-01-05 04:09:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:00.974152 | orchestrator | 2026-01-05 04:10:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:00.975811 | orchestrator | 2026-01-05 04:10:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:00.976469 | orchestrator | 2026-01-05 04:10:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:04.037029 | orchestrator | 2026-01-05 04:10:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:04.039641 | orchestrator | 2026-01-05 04:10:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:04.039704 | orchestrator | 2026-01-05 04:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:07.082052 | orchestrator | 2026-01-05 04:10:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:07.083101 | orchestrator | 2026-01-05 04:10:07 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:07.083118 | orchestrator | 2026-01-05 04:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:10.126427 | orchestrator | 2026-01-05 04:10:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:10.128562 | orchestrator | 2026-01-05 04:10:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:10.128708 | orchestrator | 2026-01-05 04:10:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:13.185830 | orchestrator | 2026-01-05 04:10:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:13.186469 | orchestrator | 2026-01-05 04:10:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:13.186506 | orchestrator | 2026-01-05 04:10:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:16.231052 | orchestrator | 2026-01-05 04:10:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:16.232171 | orchestrator | 2026-01-05 04:10:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:16.232248 | orchestrator | 2026-01-05 04:10:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:19.280841 | orchestrator | 2026-01-05 04:10:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:19.282352 | orchestrator | 2026-01-05 04:10:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:19.282433 | orchestrator | 2026-01-05 04:10:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:22.332987 | orchestrator | 2026-01-05 04:10:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:22.334483 | orchestrator | 2026-01-05 04:10:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:10:22.334600 | orchestrator | 2026-01-05 04:10:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:25.379949 | orchestrator | 2026-01-05 04:10:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:25.380411 | orchestrator | 2026-01-05 04:10:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:25.380433 | orchestrator | 2026-01-05 04:10:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:28.435838 | orchestrator | 2026-01-05 04:10:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:28.437234 | orchestrator | 2026-01-05 04:10:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:28.437329 | orchestrator | 2026-01-05 04:10:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:31.486884 | orchestrator | 2026-01-05 04:10:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:31.488216 | orchestrator | 2026-01-05 04:10:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:31.488316 | orchestrator | 2026-01-05 04:10:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:34.543328 | orchestrator | 2026-01-05 04:10:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:34.544601 | orchestrator | 2026-01-05 04:10:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:34.544638 | orchestrator | 2026-01-05 04:10:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:37.600382 | orchestrator | 2026-01-05 04:10:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:37.602138 | orchestrator | 2026-01-05 04:10:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:37.602378 | orchestrator | 2026-01-05 04:10:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:10:40.648401 | orchestrator | 2026-01-05 04:10:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:40.649086 | orchestrator | 2026-01-05 04:10:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:40.649111 | orchestrator | 2026-01-05 04:10:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:43.698690 | orchestrator | 2026-01-05 04:10:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:43.700150 | orchestrator | 2026-01-05 04:10:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:43.700292 | orchestrator | 2026-01-05 04:10:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:46.747534 | orchestrator | 2026-01-05 04:10:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:46.748470 | orchestrator | 2026-01-05 04:10:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:46.748508 | orchestrator | 2026-01-05 04:10:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:49.795354 | orchestrator | 2026-01-05 04:10:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:49.797519 | orchestrator | 2026-01-05 04:10:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:49.797631 | orchestrator | 2026-01-05 04:10:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:52.847936 | orchestrator | 2026-01-05 04:10:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:52.849347 | orchestrator | 2026-01-05 04:10:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:52.849483 | orchestrator | 2026-01-05 04:10:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:55.901848 | orchestrator | 2026-01-05 
04:10:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:55.902790 | orchestrator | 2026-01-05 04:10:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:55.902831 | orchestrator | 2026-01-05 04:10:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:58.961667 | orchestrator | 2026-01-05 04:10:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:10:58.965656 | orchestrator | 2026-01-05 04:10:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:10:58.965789 | orchestrator | 2026-01-05 04:10:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:02.011552 | orchestrator | 2026-01-05 04:11:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:02.012668 | orchestrator | 2026-01-05 04:11:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:02.012725 | orchestrator | 2026-01-05 04:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:05.063762 | orchestrator | 2026-01-05 04:11:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:05.064885 | orchestrator | 2026-01-05 04:11:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:05.064957 | orchestrator | 2026-01-05 04:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:08.107921 | orchestrator | 2026-01-05 04:11:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:08.108132 | orchestrator | 2026-01-05 04:11:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:08.108155 | orchestrator | 2026-01-05 04:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:11.155973 | orchestrator | 2026-01-05 04:11:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:11:11.157401 | orchestrator | 2026-01-05 04:11:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:11.157513 | orchestrator | 2026-01-05 04:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:14.204171 | orchestrator | 2026-01-05 04:11:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:14.205604 | orchestrator | 2026-01-05 04:11:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:14.205710 | orchestrator | 2026-01-05 04:11:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:17.256119 | orchestrator | 2026-01-05 04:11:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:17.258560 | orchestrator | 2026-01-05 04:11:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:17.258685 | orchestrator | 2026-01-05 04:11:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:20.306861 | orchestrator | 2026-01-05 04:11:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:20.310604 | orchestrator | 2026-01-05 04:11:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:20.310684 | orchestrator | 2026-01-05 04:11:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:23.355056 | orchestrator | 2026-01-05 04:11:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:23.357106 | orchestrator | 2026-01-05 04:11:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:23.357188 | orchestrator | 2026-01-05 04:11:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:26.400984 | orchestrator | 2026-01-05 04:11:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:26.403864 | orchestrator | 2026-01-05 04:11:26 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:26.404088 | orchestrator | 2026-01-05 04:11:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:29.450975 | orchestrator | 2026-01-05 04:11:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:29.451297 | orchestrator | 2026-01-05 04:11:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:29.451352 | orchestrator | 2026-01-05 04:11:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:32.499032 | orchestrator | 2026-01-05 04:11:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:32.500634 | orchestrator | 2026-01-05 04:11:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:32.500714 | orchestrator | 2026-01-05 04:11:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:35.553642 | orchestrator | 2026-01-05 04:11:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:35.624631 | orchestrator | 2026-01-05 04:11:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:35.624709 | orchestrator | 2026-01-05 04:11:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:38.615792 | orchestrator | 2026-01-05 04:11:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:38.617785 | orchestrator | 2026-01-05 04:11:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:38.617880 | orchestrator | 2026-01-05 04:11:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:41.671054 | orchestrator | 2026-01-05 04:11:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:41.672968 | orchestrator | 2026-01-05 04:11:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:11:41.673032 | orchestrator | 2026-01-05 04:11:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:44.715934 | orchestrator | 2026-01-05 04:11:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:44.718773 | orchestrator | 2026-01-05 04:11:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:44.718825 | orchestrator | 2026-01-05 04:11:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:47.771707 | orchestrator | 2026-01-05 04:11:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:47.772979 | orchestrator | 2026-01-05 04:11:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:47.773011 | orchestrator | 2026-01-05 04:11:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:50.828463 | orchestrator | 2026-01-05 04:11:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:50.831409 | orchestrator | 2026-01-05 04:11:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:50.831531 | orchestrator | 2026-01-05 04:11:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:53.884263 | orchestrator | 2026-01-05 04:11:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:53.886906 | orchestrator | 2026-01-05 04:11:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:53.886966 | orchestrator | 2026-01-05 04:11:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:56.942744 | orchestrator | 2026-01-05 04:11:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:11:56.944454 | orchestrator | 2026-01-05 04:11:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:11:56.944478 | orchestrator | 2026-01-05 04:11:56 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:11:59.999468 | orchestrator | 2026-01-05 04:11:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:00.000718 | orchestrator | 2026-01-05 04:11:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:00.000781 | orchestrator | 2026-01-05 04:11:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:03.042429 | orchestrator | 2026-01-05 04:12:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:03.043384 | orchestrator | 2026-01-05 04:12:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:03.043493 | orchestrator | 2026-01-05 04:12:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:06.088748 | orchestrator | 2026-01-05 04:12:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:06.090586 | orchestrator | 2026-01-05 04:12:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:06.090704 | orchestrator | 2026-01-05 04:12:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:09.150311 | orchestrator | 2026-01-05 04:12:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:09.150832 | orchestrator | 2026-01-05 04:12:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:09.150868 | orchestrator | 2026-01-05 04:12:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:12.200541 | orchestrator | 2026-01-05 04:12:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:12.202616 | orchestrator | 2026-01-05 04:12:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:12.202732 | orchestrator | 2026-01-05 04:12:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:15.256327 | orchestrator | 2026-01-05 
04:12:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:15.257885 | orchestrator | 2026-01-05 04:12:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:15.257976 | orchestrator | 2026-01-05 04:12:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:18.307823 | orchestrator | 2026-01-05 04:12:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:18.309615 | orchestrator | 2026-01-05 04:12:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:18.309689 | orchestrator | 2026-01-05 04:12:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:21.358571 | orchestrator | 2026-01-05 04:12:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:21.360716 | orchestrator | 2026-01-05 04:12:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:21.360763 | orchestrator | 2026-01-05 04:12:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:24.410416 | orchestrator | 2026-01-05 04:12:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:24.412420 | orchestrator | 2026-01-05 04:12:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:24.412626 | orchestrator | 2026-01-05 04:12:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:27.463577 | orchestrator | 2026-01-05 04:12:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:27.464375 | orchestrator | 2026-01-05 04:12:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:27.464399 | orchestrator | 2026-01-05 04:12:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:30.516315 | orchestrator | 2026-01-05 04:12:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:12:30.517206 | orchestrator | 2026-01-05 04:12:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:30.517244 | orchestrator | 2026-01-05 04:12:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:33.564817 | orchestrator | 2026-01-05 04:12:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:33.566419 | orchestrator | 2026-01-05 04:12:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:33.566461 | orchestrator | 2026-01-05 04:12:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:36.617497 | orchestrator | 2026-01-05 04:12:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:36.619278 | orchestrator | 2026-01-05 04:12:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:36.619335 | orchestrator | 2026-01-05 04:12:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:39.670373 | orchestrator | 2026-01-05 04:12:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:39.671333 | orchestrator | 2026-01-05 04:12:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:39.671367 | orchestrator | 2026-01-05 04:12:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:42.727460 | orchestrator | 2026-01-05 04:12:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:42.729225 | orchestrator | 2026-01-05 04:12:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:42.729379 | orchestrator | 2026-01-05 04:12:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:45.787206 | orchestrator | 2026-01-05 04:12:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:45.789291 | orchestrator | 2026-01-05 04:12:45 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:45.789364 | orchestrator | 2026-01-05 04:12:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:48.835785 | orchestrator | 2026-01-05 04:12:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:48.837785 | orchestrator | 2026-01-05 04:12:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:48.837827 | orchestrator | 2026-01-05 04:12:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:51.882342 | orchestrator | 2026-01-05 04:12:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:51.883962 | orchestrator | 2026-01-05 04:12:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:51.884183 | orchestrator | 2026-01-05 04:12:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:54.927941 | orchestrator | 2026-01-05 04:12:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:54.930622 | orchestrator | 2026-01-05 04:12:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:54.930694 | orchestrator | 2026-01-05 04:12:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:57.978622 | orchestrator | 2026-01-05 04:12:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:12:57.980082 | orchestrator | 2026-01-05 04:12:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:12:57.980252 | orchestrator | 2026-01-05 04:12:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:13:01.028798 | orchestrator | 2026-01-05 04:13:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:13:01.030000 | orchestrator | 2026-01-05 04:13:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:13:01.030140 | orchestrator | 2026-01-05 04:13:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:13:04.078966 | orchestrator | 2026-01-05 04:13:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:13:04.080669 | orchestrator | 2026-01-05 04:13:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:13:04.080715 | orchestrator | 2026-01-05 04:13:04 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 04:13:07 through 04:18:30; both tasks remained in state STARTED throughout ...]
2026-01-05 04:18:33.621046 | orchestrator | 2026-01-05 04:18:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:33.622498 | orchestrator | 2026-01-05 04:18:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:33.622530 | orchestrator | 2026-01-05 04:18:33 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:18:36.673166 | orchestrator | 2026-01-05 04:18:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:36.675344 | orchestrator | 2026-01-05 04:18:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:36.675410 | orchestrator | 2026-01-05 04:18:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:39.718327 | orchestrator | 2026-01-05 04:18:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:39.720875 | orchestrator | 2026-01-05 04:18:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:39.720950 | orchestrator | 2026-01-05 04:18:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:42.764772 | orchestrator | 2026-01-05 04:18:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:42.766372 | orchestrator | 2026-01-05 04:18:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:42.766859 | orchestrator | 2026-01-05 04:18:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:45.811294 | orchestrator | 2026-01-05 04:18:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:45.812547 | orchestrator | 2026-01-05 04:18:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:45.812576 | orchestrator | 2026-01-05 04:18:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:48.857379 | orchestrator | 2026-01-05 04:18:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:48.860047 | orchestrator | 2026-01-05 04:18:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:48.860136 | orchestrator | 2026-01-05 04:18:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:51.905697 | orchestrator | 2026-01-05 
04:18:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:51.907966 | orchestrator | 2026-01-05 04:18:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:51.908035 | orchestrator | 2026-01-05 04:18:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:54.963505 | orchestrator | 2026-01-05 04:18:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:54.964903 | orchestrator | 2026-01-05 04:18:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:54.964992 | orchestrator | 2026-01-05 04:18:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:58.026444 | orchestrator | 2026-01-05 04:18:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:18:58.029574 | orchestrator | 2026-01-05 04:18:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:18:58.029616 | orchestrator | 2026-01-05 04:18:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:01.079033 | orchestrator | 2026-01-05 04:19:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:01.080501 | orchestrator | 2026-01-05 04:19:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:01.080580 | orchestrator | 2026-01-05 04:19:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:04.127459 | orchestrator | 2026-01-05 04:19:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:04.130196 | orchestrator | 2026-01-05 04:19:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:04.130256 | orchestrator | 2026-01-05 04:19:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:07.178946 | orchestrator | 2026-01-05 04:19:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:19:07.181159 | orchestrator | 2026-01-05 04:19:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:07.181205 | orchestrator | 2026-01-05 04:19:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:10.233327 | orchestrator | 2026-01-05 04:19:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:10.235153 | orchestrator | 2026-01-05 04:19:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:10.235303 | orchestrator | 2026-01-05 04:19:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:13.292037 | orchestrator | 2026-01-05 04:19:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:13.293614 | orchestrator | 2026-01-05 04:19:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:13.293709 | orchestrator | 2026-01-05 04:19:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:16.342419 | orchestrator | 2026-01-05 04:19:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:16.343678 | orchestrator | 2026-01-05 04:19:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:16.343725 | orchestrator | 2026-01-05 04:19:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:19.398771 | orchestrator | 2026-01-05 04:19:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:19.401057 | orchestrator | 2026-01-05 04:19:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:19.401106 | orchestrator | 2026-01-05 04:19:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:22.455637 | orchestrator | 2026-01-05 04:19:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:22.456556 | orchestrator | 2026-01-05 04:19:22 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:22.456650 | orchestrator | 2026-01-05 04:19:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:25.510542 | orchestrator | 2026-01-05 04:19:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:25.512506 | orchestrator | 2026-01-05 04:19:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:25.512552 | orchestrator | 2026-01-05 04:19:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:28.565697 | orchestrator | 2026-01-05 04:19:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:28.567109 | orchestrator | 2026-01-05 04:19:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:28.567151 | orchestrator | 2026-01-05 04:19:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:31.618576 | orchestrator | 2026-01-05 04:19:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:31.621528 | orchestrator | 2026-01-05 04:19:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:31.621591 | orchestrator | 2026-01-05 04:19:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:34.675643 | orchestrator | 2026-01-05 04:19:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:34.677029 | orchestrator | 2026-01-05 04:19:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:34.677111 | orchestrator | 2026-01-05 04:19:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:37.726687 | orchestrator | 2026-01-05 04:19:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:37.729238 | orchestrator | 2026-01-05 04:19:37 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:19:37.729381 | orchestrator | 2026-01-05 04:19:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:40.779465 | orchestrator | 2026-01-05 04:19:40 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:40.781355 | orchestrator | 2026-01-05 04:19:40 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:40.781465 | orchestrator | 2026-01-05 04:19:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:43.835076 | orchestrator | 2026-01-05 04:19:43 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:43.836756 | orchestrator | 2026-01-05 04:19:43 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:43.836880 | orchestrator | 2026-01-05 04:19:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:46.879464 | orchestrator | 2026-01-05 04:19:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:46.882468 | orchestrator | 2026-01-05 04:19:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:46.882660 | orchestrator | 2026-01-05 04:19:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:49.932116 | orchestrator | 2026-01-05 04:19:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:49.937029 | orchestrator | 2026-01-05 04:19:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:49.937115 | orchestrator | 2026-01-05 04:19:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:52.988982 | orchestrator | 2026-01-05 04:19:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:52.990004 | orchestrator | 2026-01-05 04:19:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:52.990150 | orchestrator | 2026-01-05 04:19:52 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:19:56.037562 | orchestrator | 2026-01-05 04:19:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:56.039739 | orchestrator | 2026-01-05 04:19:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:56.039982 | orchestrator | 2026-01-05 04:19:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:59.090348 | orchestrator | 2026-01-05 04:19:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:19:59.092015 | orchestrator | 2026-01-05 04:19:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:19:59.092096 | orchestrator | 2026-01-05 04:19:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:02.139110 | orchestrator | 2026-01-05 04:20:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:02.140497 | orchestrator | 2026-01-05 04:20:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:02.140528 | orchestrator | 2026-01-05 04:20:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:05.191598 | orchestrator | 2026-01-05 04:20:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:05.193148 | orchestrator | 2026-01-05 04:20:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:05.193275 | orchestrator | 2026-01-05 04:20:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:08.238255 | orchestrator | 2026-01-05 04:20:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:08.240966 | orchestrator | 2026-01-05 04:20:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:08.241029 | orchestrator | 2026-01-05 04:20:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:11.289758 | orchestrator | 2026-01-05 
04:20:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:11.291239 | orchestrator | 2026-01-05 04:20:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:11.291350 | orchestrator | 2026-01-05 04:20:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:14.334954 | orchestrator | 2026-01-05 04:20:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:14.336464 | orchestrator | 2026-01-05 04:20:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:14.336567 | orchestrator | 2026-01-05 04:20:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:17.387810 | orchestrator | 2026-01-05 04:20:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:17.389745 | orchestrator | 2026-01-05 04:20:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:17.389852 | orchestrator | 2026-01-05 04:20:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:20.435868 | orchestrator | 2026-01-05 04:20:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:20.437398 | orchestrator | 2026-01-05 04:20:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:20.437457 | orchestrator | 2026-01-05 04:20:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:23.486356 | orchestrator | 2026-01-05 04:20:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:23.488368 | orchestrator | 2026-01-05 04:20:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:23.488428 | orchestrator | 2026-01-05 04:20:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:26.535322 | orchestrator | 2026-01-05 04:20:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:20:26.536765 | orchestrator | 2026-01-05 04:20:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:26.536888 | orchestrator | 2026-01-05 04:20:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:29.590494 | orchestrator | 2026-01-05 04:20:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:29.591949 | orchestrator | 2026-01-05 04:20:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:29.592513 | orchestrator | 2026-01-05 04:20:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:32.640374 | orchestrator | 2026-01-05 04:20:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:32.641614 | orchestrator | 2026-01-05 04:20:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:32.641672 | orchestrator | 2026-01-05 04:20:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:35.689750 | orchestrator | 2026-01-05 04:20:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:35.692059 | orchestrator | 2026-01-05 04:20:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:35.692118 | orchestrator | 2026-01-05 04:20:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:38.735400 | orchestrator | 2026-01-05 04:20:38 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:38.737225 | orchestrator | 2026-01-05 04:20:38 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:38.737275 | orchestrator | 2026-01-05 04:20:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:41.785504 | orchestrator | 2026-01-05 04:20:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:41.787714 | orchestrator | 2026-01-05 04:20:41 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:41.787867 | orchestrator | 2026-01-05 04:20:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:44.840642 | orchestrator | 2026-01-05 04:20:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:44.842198 | orchestrator | 2026-01-05 04:20:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:44.842470 | orchestrator | 2026-01-05 04:20:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:47.893223 | orchestrator | 2026-01-05 04:20:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:47.895339 | orchestrator | 2026-01-05 04:20:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:47.895414 | orchestrator | 2026-01-05 04:20:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:50.945067 | orchestrator | 2026-01-05 04:20:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:50.946494 | orchestrator | 2026-01-05 04:20:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:50.946546 | orchestrator | 2026-01-05 04:20:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:53.996932 | orchestrator | 2026-01-05 04:20:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:53.998962 | orchestrator | 2026-01-05 04:20:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:20:53.999030 | orchestrator | 2026-01-05 04:20:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:57.040579 | orchestrator | 2026-01-05 04:20:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:20:57.042956 | orchestrator | 2026-01-05 04:20:57 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:20:57.043059 | orchestrator | 2026-01-05 04:20:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:00.096588 | orchestrator | 2026-01-05 04:21:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:00.098728 | orchestrator | 2026-01-05 04:21:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:00.098852 | orchestrator | 2026-01-05 04:21:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:03.149259 | orchestrator | 2026-01-05 04:21:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:03.150368 | orchestrator | 2026-01-05 04:21:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:03.150459 | orchestrator | 2026-01-05 04:21:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:06.196131 | orchestrator | 2026-01-05 04:21:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:06.197414 | orchestrator | 2026-01-05 04:21:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:06.197506 | orchestrator | 2026-01-05 04:21:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:09.248589 | orchestrator | 2026-01-05 04:21:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:09.250427 | orchestrator | 2026-01-05 04:21:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:09.250497 | orchestrator | 2026-01-05 04:21:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:12.300903 | orchestrator | 2026-01-05 04:21:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:12.301467 | orchestrator | 2026-01-05 04:21:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:12.301646 | orchestrator | 2026-01-05 04:21:12 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:21:15.357276 | orchestrator | 2026-01-05 04:21:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:15.359154 | orchestrator | 2026-01-05 04:21:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:15.359208 | orchestrator | 2026-01-05 04:21:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:18.406981 | orchestrator | 2026-01-05 04:21:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:18.409676 | orchestrator | 2026-01-05 04:21:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:18.409770 | orchestrator | 2026-01-05 04:21:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:21.464458 | orchestrator | 2026-01-05 04:21:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:21.466001 | orchestrator | 2026-01-05 04:21:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:21.466115 | orchestrator | 2026-01-05 04:21:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:24.515731 | orchestrator | 2026-01-05 04:21:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:24.517610 | orchestrator | 2026-01-05 04:21:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:24.517702 | orchestrator | 2026-01-05 04:21:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:27.571136 | orchestrator | 2026-01-05 04:21:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:27.573371 | orchestrator | 2026-01-05 04:21:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:27.573449 | orchestrator | 2026-01-05 04:21:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:30.627105 | orchestrator | 2026-01-05 
04:21:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:30.630995 | orchestrator | 2026-01-05 04:21:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:30.631154 | orchestrator | 2026-01-05 04:21:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:33.679090 | orchestrator | 2026-01-05 04:21:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:33.683365 | orchestrator | 2026-01-05 04:21:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:33.683424 | orchestrator | 2026-01-05 04:21:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:36.737295 | orchestrator | 2026-01-05 04:21:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:36.740638 | orchestrator | 2026-01-05 04:21:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:36.740687 | orchestrator | 2026-01-05 04:21:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:39.789273 | orchestrator | 2026-01-05 04:21:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:39.790482 | orchestrator | 2026-01-05 04:21:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:39.790626 | orchestrator | 2026-01-05 04:21:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:42.841086 | orchestrator | 2026-01-05 04:21:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:42.842902 | orchestrator | 2026-01-05 04:21:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:42.843104 | orchestrator | 2026-01-05 04:21:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:45.894920 | orchestrator | 2026-01-05 04:21:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:21:45.897955 | orchestrator | 2026-01-05 04:21:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:45.898089 | orchestrator | 2026-01-05 04:21:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:48.954207 | orchestrator | 2026-01-05 04:21:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:48.955689 | orchestrator | 2026-01-05 04:21:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:48.955751 | orchestrator | 2026-01-05 04:21:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:52.012719 | orchestrator | 2026-01-05 04:21:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:52.014488 | orchestrator | 2026-01-05 04:21:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:52.014561 | orchestrator | 2026-01-05 04:21:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:55.062124 | orchestrator | 2026-01-05 04:21:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:55.065985 | orchestrator | 2026-01-05 04:21:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:55.066080 | orchestrator | 2026-01-05 04:21:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:58.117545 | orchestrator | 2026-01-05 04:21:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:21:58.120568 | orchestrator | 2026-01-05 04:21:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:21:58.120763 | orchestrator | 2026-01-05 04:21:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:22:01.169678 | orchestrator | 2026-01-05 04:22:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:22:01.171060 | orchestrator | 2026-01-05 04:22:01 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:22:01.171278 | orchestrator | 2026-01-05 04:22:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:22:04.214367 | orchestrator | 2026-01-05 04:22:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:22:04.216015 | orchestrator | 2026-01-05 04:22:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:22:04.216222 | orchestrator | 2026-01-05 04:22:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:22:07.264849 | orchestrator | 2026-01-05 04:22:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:22:07.266125 | orchestrator | 2026-01-05 04:22:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:22:07.266367 | orchestrator | 2026-01-05 04:22:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:22:10.309531 | orchestrator | 2026-01-05 04:22:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:22:10.310003 | orchestrator | 2026-01-05 04:22:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:22:10.310229 | orchestrator | 2026-01-05 04:22:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:22:13.353347 | orchestrator | 2026-01-05 04:22:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:22:13.353946 | orchestrator | 2026-01-05 04:22:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:22:13.353980 | orchestrator | 2026-01-05 04:22:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:22:16.407262 | orchestrator | 2026-01-05 04:22:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:22:16.407780 | orchestrator | 2026-01-05 04:22:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:22:16.407873 | orchestrator | 2026-01-05 04:22:16 | INFO  | Wait 1 second(s) until the next check
2026-01-05 04:22:19.458876 | orchestrator | 2026-01-05 04:22:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 04:22:19.460577 | orchestrator | 2026-01-05 04:22:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 04:22:19.460627 | orchestrator | 2026-01-05 04:22:19 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle repeated every ~3 seconds from 04:22:22 through 04:27:15; both tasks remained in state STARTED throughout ...]
2026-01-05 04:27:18.566282 | orchestrator | 2026-01-05 04:27:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 04:27:18.568011 | orchestrator | 2026-01-05 04:27:18 | INFO 
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:18.568296 | orchestrator | 2026-01-05 04:27:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:21.621481 | orchestrator | 2026-01-05 04:27:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:21.622727 | orchestrator | 2026-01-05 04:27:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:21.622785 | orchestrator | 2026-01-05 04:27:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:24.683374 | orchestrator | 2026-01-05 04:27:24 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:24.685459 | orchestrator | 2026-01-05 04:27:24 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:24.685498 | orchestrator | 2026-01-05 04:27:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:27.739165 | orchestrator | 2026-01-05 04:27:27 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:27.740762 | orchestrator | 2026-01-05 04:27:27 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:27.740944 | orchestrator | 2026-01-05 04:27:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:30.795311 | orchestrator | 2026-01-05 04:27:30 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:30.797923 | orchestrator | 2026-01-05 04:27:30 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:30.798130 | orchestrator | 2026-01-05 04:27:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:33.840582 | orchestrator | 2026-01-05 04:27:33 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:33.841223 | orchestrator | 2026-01-05 04:27:33 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:27:33.841274 | orchestrator | 2026-01-05 04:27:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:36.882008 | orchestrator | 2026-01-05 04:27:36 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:36.882607 | orchestrator | 2026-01-05 04:27:36 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:36.882624 | orchestrator | 2026-01-05 04:27:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:39.924280 | orchestrator | 2026-01-05 04:27:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:39.925859 | orchestrator | 2026-01-05 04:27:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:39.925943 | orchestrator | 2026-01-05 04:27:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:42.971661 | orchestrator | 2026-01-05 04:27:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:42.972923 | orchestrator | 2026-01-05 04:27:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:42.972963 | orchestrator | 2026-01-05 04:27:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:46.028682 | orchestrator | 2026-01-05 04:27:46 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:46.031093 | orchestrator | 2026-01-05 04:27:46 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:46.031188 | orchestrator | 2026-01-05 04:27:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:49.079384 | orchestrator | 2026-01-05 04:27:49 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:49.079462 | orchestrator | 2026-01-05 04:27:49 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:49.079500 | orchestrator | 2026-01-05 04:27:49 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:27:52.120863 | orchestrator | 2026-01-05 04:27:52 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:52.122460 | orchestrator | 2026-01-05 04:27:52 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:52.122525 | orchestrator | 2026-01-05 04:27:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:55.177300 | orchestrator | 2026-01-05 04:27:55 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:55.177899 | orchestrator | 2026-01-05 04:27:55 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:55.178080 | orchestrator | 2026-01-05 04:27:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:58.229490 | orchestrator | 2026-01-05 04:27:58 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:27:58.231762 | orchestrator | 2026-01-05 04:27:58 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:27:58.231842 | orchestrator | 2026-01-05 04:27:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:01.300532 | orchestrator | 2026-01-05 04:28:01 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:01.304174 | orchestrator | 2026-01-05 04:28:01 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:01.304271 | orchestrator | 2026-01-05 04:28:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:04.347166 | orchestrator | 2026-01-05 04:28:04 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:04.347965 | orchestrator | 2026-01-05 04:28:04 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:04.348090 | orchestrator | 2026-01-05 04:28:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:07.396870 | orchestrator | 2026-01-05 
04:28:07 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:07.399014 | orchestrator | 2026-01-05 04:28:07 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:07.399065 | orchestrator | 2026-01-05 04:28:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:10.455827 | orchestrator | 2026-01-05 04:28:10 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:10.458645 | orchestrator | 2026-01-05 04:28:10 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:10.458685 | orchestrator | 2026-01-05 04:28:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:13.519710 | orchestrator | 2026-01-05 04:28:13 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:13.522995 | orchestrator | 2026-01-05 04:28:13 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:13.523136 | orchestrator | 2026-01-05 04:28:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:16.578420 | orchestrator | 2026-01-05 04:28:16 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:16.580630 | orchestrator | 2026-01-05 04:28:16 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:16.580679 | orchestrator | 2026-01-05 04:28:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:19.632435 | orchestrator | 2026-01-05 04:28:19 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:19.634105 | orchestrator | 2026-01-05 04:28:19 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:19.634143 | orchestrator | 2026-01-05 04:28:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:22.689403 | orchestrator | 2026-01-05 04:28:22 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:28:22.690532 | orchestrator | 2026-01-05 04:28:22 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:22.690686 | orchestrator | 2026-01-05 04:28:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:25.745442 | orchestrator | 2026-01-05 04:28:25 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:25.746853 | orchestrator | 2026-01-05 04:28:25 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:25.747392 | orchestrator | 2026-01-05 04:28:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:28.805005 | orchestrator | 2026-01-05 04:28:28 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:28.806842 | orchestrator | 2026-01-05 04:28:28 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:28.806938 | orchestrator | 2026-01-05 04:28:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:31.864156 | orchestrator | 2026-01-05 04:28:31 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:31.867046 | orchestrator | 2026-01-05 04:28:31 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:31.867116 | orchestrator | 2026-01-05 04:28:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:34.924502 | orchestrator | 2026-01-05 04:28:34 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:34.926104 | orchestrator | 2026-01-05 04:28:34 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:34.926185 | orchestrator | 2026-01-05 04:28:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:37.987121 | orchestrator | 2026-01-05 04:28:37 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:37.988439 | orchestrator | 2026-01-05 04:28:37 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:37.988537 | orchestrator | 2026-01-05 04:28:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:41.039684 | orchestrator | 2026-01-05 04:28:41 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:41.041066 | orchestrator | 2026-01-05 04:28:41 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:41.041131 | orchestrator | 2026-01-05 04:28:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:44.090137 | orchestrator | 2026-01-05 04:28:44 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:44.091187 | orchestrator | 2026-01-05 04:28:44 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:44.091222 | orchestrator | 2026-01-05 04:28:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:47.141633 | orchestrator | 2026-01-05 04:28:47 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:47.143987 | orchestrator | 2026-01-05 04:28:47 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:47.144024 | orchestrator | 2026-01-05 04:28:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:50.185031 | orchestrator | 2026-01-05 04:28:50 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:50.186836 | orchestrator | 2026-01-05 04:28:50 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:50.187088 | orchestrator | 2026-01-05 04:28:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:53.238227 | orchestrator | 2026-01-05 04:28:53 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:53.239187 | orchestrator | 2026-01-05 04:28:53 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:28:53.239208 | orchestrator | 2026-01-05 04:28:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:56.288140 | orchestrator | 2026-01-05 04:28:56 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:56.292570 | orchestrator | 2026-01-05 04:28:56 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:56.292649 | orchestrator | 2026-01-05 04:28:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:59.341339 | orchestrator | 2026-01-05 04:28:59 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:28:59.342940 | orchestrator | 2026-01-05 04:28:59 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:28:59.342993 | orchestrator | 2026-01-05 04:28:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:02.390497 | orchestrator | 2026-01-05 04:29:02 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:02.392513 | orchestrator | 2026-01-05 04:29:02 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:02.392712 | orchestrator | 2026-01-05 04:29:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:05.440227 | orchestrator | 2026-01-05 04:29:05 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:05.441689 | orchestrator | 2026-01-05 04:29:05 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:05.441943 | orchestrator | 2026-01-05 04:29:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:08.486728 | orchestrator | 2026-01-05 04:29:08 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:08.488963 | orchestrator | 2026-01-05 04:29:08 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:08.489150 | orchestrator | 2026-01-05 04:29:08 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:29:11.533761 | orchestrator | 2026-01-05 04:29:11 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:11.535459 | orchestrator | 2026-01-05 04:29:11 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:11.535521 | orchestrator | 2026-01-05 04:29:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:14.592460 | orchestrator | 2026-01-05 04:29:14 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:14.594741 | orchestrator | 2026-01-05 04:29:14 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:14.594806 | orchestrator | 2026-01-05 04:29:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:17.643944 | orchestrator | 2026-01-05 04:29:17 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:17.645444 | orchestrator | 2026-01-05 04:29:17 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:17.645501 | orchestrator | 2026-01-05 04:29:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:20.696349 | orchestrator | 2026-01-05 04:29:20 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:20.700456 | orchestrator | 2026-01-05 04:29:20 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:20.700528 | orchestrator | 2026-01-05 04:29:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:23.740230 | orchestrator | 2026-01-05 04:29:23 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:23.740455 | orchestrator | 2026-01-05 04:29:23 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:23.740476 | orchestrator | 2026-01-05 04:29:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:26.788254 | orchestrator | 2026-01-05 
04:29:26 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:26.789989 | orchestrator | 2026-01-05 04:29:26 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:26.790154 | orchestrator | 2026-01-05 04:29:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:29.842883 | orchestrator | 2026-01-05 04:29:29 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:29.846276 | orchestrator | 2026-01-05 04:29:29 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:29.846411 | orchestrator | 2026-01-05 04:29:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:32.900397 | orchestrator | 2026-01-05 04:29:32 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:32.902608 | orchestrator | 2026-01-05 04:29:32 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:32.902673 | orchestrator | 2026-01-05 04:29:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:35.956272 | orchestrator | 2026-01-05 04:29:35 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:35.959349 | orchestrator | 2026-01-05 04:29:35 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:35.959440 | orchestrator | 2026-01-05 04:29:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:39.012319 | orchestrator | 2026-01-05 04:29:39 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:39.014639 | orchestrator | 2026-01-05 04:29:39 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:39.014763 | orchestrator | 2026-01-05 04:29:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:42.065391 | orchestrator | 2026-01-05 04:29:42 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state 
STARTED 2026-01-05 04:29:42.067777 | orchestrator | 2026-01-05 04:29:42 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:42.067981 | orchestrator | 2026-01-05 04:29:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:45.105359 | orchestrator | 2026-01-05 04:29:45 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:45.107009 | orchestrator | 2026-01-05 04:29:45 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:45.107054 | orchestrator | 2026-01-05 04:29:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:48.156441 | orchestrator | 2026-01-05 04:29:48 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:48.158266 | orchestrator | 2026-01-05 04:29:48 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:48.158328 | orchestrator | 2026-01-05 04:29:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:51.211063 | orchestrator | 2026-01-05 04:29:51 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:51.212420 | orchestrator | 2026-01-05 04:29:51 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:51.212455 | orchestrator | 2026-01-05 04:29:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:54.267052 | orchestrator | 2026-01-05 04:29:54 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:54.269263 | orchestrator | 2026-01-05 04:29:54 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:54.269312 | orchestrator | 2026-01-05 04:29:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:57.318228 | orchestrator | 2026-01-05 04:29:57 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:29:57.320966 | orchestrator | 2026-01-05 04:29:57 | INFO  
| Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:29:57.321300 | orchestrator | 2026-01-05 04:29:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:00.372611 | orchestrator | 2026-01-05 04:30:00 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:30:00.376659 | orchestrator | 2026-01-05 04:30:00 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:30:00.376839 | orchestrator | 2026-01-05 04:30:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:03.419092 | orchestrator | 2026-01-05 04:30:03 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:30:03.420718 | orchestrator | 2026-01-05 04:30:03 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:30:03.420793 | orchestrator | 2026-01-05 04:30:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:06.471810 | orchestrator | 2026-01-05 04:30:06 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:30:06.472902 | orchestrator | 2026-01-05 04:30:06 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:30:06.473009 | orchestrator | 2026-01-05 04:30:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:09.526278 | orchestrator | 2026-01-05 04:30:09 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:30:09.528247 | orchestrator | 2026-01-05 04:30:09 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 04:30:09.528646 | orchestrator | 2026-01-05 04:30:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:12.585388 | orchestrator | 2026-01-05 04:30:12 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED 2026-01-05 04:30:12.587434 | orchestrator | 2026-01-05 04:30:12 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED 2026-01-05 
04:30:12.587474 | orchestrator | 2026-01-05 04:30:12 | INFO  | Wait 1 second(s) until the next check
2026-01-05 04:30:15.637182 | orchestrator | 2026-01-05 04:30:15 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 04:30:15.638491 | orchestrator | 2026-01-05 04:30:15 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 04:30:15.638530 | orchestrator | 2026-01-05 04:30:15 | INFO  | Wait 1 second(s) until the next check
2026-01-05 04:30:18.689469 | orchestrator | 2026-01-05 04:30:18 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 04:30:18.692484 | orchestrator | 2026-01-05 04:30:18 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 04:30:18.692549 | orchestrator | 2026-01-05 04:30:18 | INFO  | Wait 1 second(s) until the next check
2026-01-05 04:30:21.742360 | orchestrator | 2026-01-05 04:30:21 | INFO  | Task afe8ab2b-12c8-47a5-a936-080dda967fc3 is in state STARTED
2026-01-05 04:30:21.742571 | orchestrator | 2026-01-05 04:30:21 | INFO  | Task 861ec4e0-4387-4901-b7ab-9d4f13823dbe is in state STARTED
2026-01-05 04:30:21.742599 | orchestrator | 2026-01-05 04:30:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 04:30:23.354515 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-05 04:30:23.356203 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-05 04:30:24.166242 |
2026-01-05 04:30:24.166522 | PLAY [Post output play]
2026-01-05 04:30:24.185315 |
2026-01-05 04:30:24.185482 | LOOP [stage-output : Register sources]
2026-01-05 04:30:24.240112 |
2026-01-05 04:30:24.240369 | TASK [stage-output : Check sudo]
2026-01-05 04:30:25.139291 | orchestrator | sudo: a password is required
2026-01-05 04:30:25.277530 | orchestrator | ok: Runtime: 0:00:00.019714
2026-01-05 04:30:25.292007 |
2026-01-05 04:30:25.292153 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-05 04:30:25.329424 |
2026-01-05 04:30:25.329700 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-05 04:30:25.407906 | orchestrator | ok
2026-01-05 04:30:25.420280 |
2026-01-05 04:30:25.420634 | LOOP [stage-output : Ensure target folders exist]
2026-01-05 04:30:25.973144 | orchestrator | ok: "docs"
2026-01-05 04:30:25.973492 |
2026-01-05 04:30:26.216872 | orchestrator | ok: "artifacts"
2026-01-05 04:30:26.482123 | orchestrator | ok: "logs"
2026-01-05 04:30:26.500042 |
2026-01-05 04:30:26.500193 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-05 04:30:26.531162 |
2026-01-05 04:30:26.531371 | TASK [stage-output : Make all log files readable]
2026-01-05 04:30:26.850655 | orchestrator | ok
2026-01-05 04:30:26.857890 |
2026-01-05 04:30:26.858017 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-05 04:30:26.894017 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:26.908118 |
2026-01-05 04:30:26.908281 | TASK [stage-output : Discover log files for compression]
2026-01-05 04:30:26.924522 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:26.939987 |
2026-01-05 04:30:26.940164 | LOOP [stage-output : Archive everything from logs]
2026-01-05 04:30:26.985770 |
2026-01-05 04:30:26.986018 | PLAY [Post cleanup play]
2026-01-05 04:30:26.997187 |
2026-01-05 04:30:26.997315 | TASK [Set cloud fact (Zuul deployment)]
2026-01-05 04:30:27.063412 | orchestrator | ok
2026-01-05 04:30:27.072885 |
2026-01-05 04:30:27.073026 | TASK [Set cloud fact (local deployment)]
2026-01-05 04:30:27.107118 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:27.120391 |
2026-01-05 04:30:27.120631 | TASK [Clean the cloud environment]
2026-01-05 04:30:27.791928 | orchestrator | 2026-01-05 04:30:27 - clean up servers
2026-01-05 04:30:28.813801 | orchestrator | 2026-01-05 04:30:28 - testbed-manager
2026-01-05 04:30:28.898231 | orchestrator | 2026-01-05 04:30:28 - testbed-node-1
2026-01-05 04:30:28.992981 | orchestrator | 2026-01-05 04:30:28 - testbed-node-0
2026-01-05 04:30:29.089964 | orchestrator | 2026-01-05 04:30:29 - testbed-node-4
2026-01-05 04:30:29.178789 | orchestrator | 2026-01-05 04:30:29 - testbed-node-2
2026-01-05 04:30:29.263883 | orchestrator | 2026-01-05 04:30:29 - testbed-node-3
2026-01-05 04:30:29.357046 | orchestrator | 2026-01-05 04:30:29 - testbed-node-5
2026-01-05 04:30:29.450411 | orchestrator | 2026-01-05 04:30:29 - clean up keypairs
2026-01-05 04:30:29.473020 | orchestrator | 2026-01-05 04:30:29 - testbed
2026-01-05 04:30:29.497808 | orchestrator | 2026-01-05 04:30:29 - wait for servers to be gone
2026-01-05 04:30:40.353288 | orchestrator | 2026-01-05 04:30:40 - clean up ports
2026-01-05 04:30:40.532419 | orchestrator | 2026-01-05 04:30:40 - 12e8ff8a-0666-4168-8d84-402ef8ccae4b
2026-01-05 04:30:40.982874 | orchestrator | 2026-01-05 04:30:40 - 290b43c1-2c78-4238-bbc0-aa2100d27fb1
2026-01-05 04:30:41.289417 | orchestrator | 2026-01-05 04:30:41 - 647c05fb-dc7f-46e1-b4e5-d1173ba134d6
2026-01-05 04:30:41.537678 | orchestrator | 2026-01-05 04:30:41 - 658a1017-e324-4164-b7cc-5f9232a381fd
2026-01-05 04:30:41.774194 | orchestrator | 2026-01-05 04:30:41 - 7bf0abae-4308-4f0c-9d87-b7062be11317
2026-01-05 04:30:41.987017 | orchestrator | 2026-01-05 04:30:41 - acf82194-f986-44c0-a6db-8ac5bb9b6749
2026-01-05 04:30:42.219102 | orchestrator | 2026-01-05 04:30:42 - d78f08ff-60c1-4dcc-b118-8be55ee82192
2026-01-05 04:30:42.428254 | orchestrator | 2026-01-05 04:30:42 - clean up volumes
2026-01-05 04:30:42.604743 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-2-node-base
2026-01-05 04:30:42.648094 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-3-node-base
2026-01-05 04:30:42.696101 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-4-node-base
2026-01-05 04:30:42.740050 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-0-node-base
2026-01-05 04:30:42.785448 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-1-node-base
2026-01-05 04:30:42.838221 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-5-node-base
2026-01-05 04:30:42.889828 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-manager-base
2026-01-05 04:30:42.933799 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-7-node-4
2026-01-05 04:30:42.979260 | orchestrator | 2026-01-05 04:30:42 - testbed-volume-0-node-3
2026-01-05 04:30:43.028709 | orchestrator | 2026-01-05 04:30:43 - testbed-volume-4-node-4
2026-01-05 04:30:43.077735 | orchestrator | 2026-01-05 04:30:43 - testbed-volume-1-node-4
2026-01-05 04:30:43.135362 | orchestrator | 2026-01-05 04:30:43 - testbed-volume-2-node-5
2026-01-05 04:30:43.178808 | orchestrator | 2026-01-05 04:30:43 - testbed-volume-5-node-5
2026-01-05 04:30:43.223890 | orchestrator | 2026-01-05 04:30:43 - testbed-volume-8-node-5
2026-01-05 04:30:43.270111 | orchestrator | 2026-01-05 04:30:43 - testbed-volume-6-node-3
2026-01-05 04:30:43.316921 | orchestrator | 2026-01-05 04:30:43 - testbed-volume-3-node-3
2026-01-05 04:30:43.359232 | orchestrator | 2026-01-05 04:30:43 - disconnect routers
2026-01-05 04:30:44.051058 | orchestrator | 2026-01-05 04:30:44 - testbed
2026-01-05 04:30:45.022694 | orchestrator | 2026-01-05 04:30:45 - clean up subnets
2026-01-05 04:30:45.094902 | orchestrator | 2026-01-05 04:30:45 - subnet-testbed-management
2026-01-05 04:30:45.282761 | orchestrator | 2026-01-05 04:30:45 - clean up networks
2026-01-05 04:30:45.474837 | orchestrator | 2026-01-05 04:30:45 - net-testbed-management
2026-01-05 04:30:45.864003 | orchestrator | 2026-01-05 04:30:45 - clean up security groups
2026-01-05 04:30:45.917022 | orchestrator | 2026-01-05 04:30:45 - testbed-management
2026-01-05 04:30:46.050447 | orchestrator | 2026-01-05 04:30:46 - testbed-node
2026-01-05 04:30:46.182288 | orchestrator | 2026-01-05 04:30:46 - clean up floating ips
2026-01-05 04:30:46.226195 | orchestrator | 2026-01-05 04:30:46 - 81.163.193.38
2026-01-05 04:30:46.593659 | orchestrator | 2026-01-05 04:30:46 - clean up routers
2026-01-05 04:30:46.703728 | orchestrator | 2026-01-05 04:30:46 - testbed
2026-01-05 04:30:47.676117 | orchestrator | ok: Runtime: 0:00:20.136012
2026-01-05 04:30:47.680846 |
2026-01-05 04:30:47.681005 | PLAY RECAP
2026-01-05 04:30:47.681120 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-05 04:30:47.681176 |
2026-01-05 04:30:47.848904 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-05 04:30:47.857926 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-05 04:30:48.734714 |
2026-01-05 04:30:48.734949 | PLAY [Cleanup play]
2026-01-05 04:30:48.753123 |
2026-01-05 04:30:48.753287 | TASK [Set cloud fact (Zuul deployment)]
2026-01-05 04:30:48.815453 | orchestrator | ok
2026-01-05 04:30:48.823069 |
2026-01-05 04:30:48.823247 | TASK [Set cloud fact (local deployment)]
2026-01-05 04:30:48.868882 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:48.880320 |
2026-01-05 04:30:48.880474 | TASK [Clean the cloud environment]
2026-01-05 04:30:50.130786 | orchestrator | 2026-01-05 04:30:50 - clean up servers
2026-01-05 04:30:50.771850 | orchestrator | 2026-01-05 04:30:50 - clean up keypairs
2026-01-05 04:30:50.790273 | orchestrator | 2026-01-05 04:30:50 - wait for servers to be gone
2026-01-05 04:30:50.831646 | orchestrator | 2026-01-05 04:30:50 - clean up ports
2026-01-05 04:30:50.927798 | orchestrator | 2026-01-05 04:30:50 - clean up volumes
2026-01-05 04:30:50.992248 | orchestrator | 2026-01-05 04:30:50 - disconnect routers
2026-01-05 04:30:51.023693 | orchestrator | 2026-01-05 04:30:51 - clean up subnets
2026-01-05 04:30:51.058673 | orchestrator | 2026-01-05 04:30:51 - clean up networks
2026-01-05 04:30:51.236336 | orchestrator | 2026-01-05 04:30:51 - clean up security groups
2026-01-05 04:30:51.281355 | orchestrator | 2026-01-05 04:30:51 - clean up floating ips
2026-01-05 04:30:51.306459 | orchestrator | 2026-01-05 04:30:51 - clean up routers
2026-01-05 04:30:51.920075 | orchestrator | ok: Runtime: 0:00:01.614278
2026-01-05 04:30:51.922445 |
2026-01-05 04:30:51.922567 | PLAY RECAP
2026-01-05 04:30:51.922631 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-05 04:30:51.922663 |
2026-01-05 04:30:52.085526 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-05 04:30:52.086815 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-05 04:30:52.903456 |
2026-01-05 04:30:52.903717 | PLAY [Base post-fetch]
2026-01-05 04:30:52.920520 |
2026-01-05 04:30:52.920698 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-05 04:30:52.968474 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:52.978623 |
2026-01-05 04:30:52.978817 | TASK [fetch-output : Set log path for single node]
2026-01-05 04:30:53.019915 | orchestrator | ok
2026-01-05 04:30:53.027339 |
2026-01-05 04:30:53.027487 | LOOP [fetch-output : Ensure local output dirs]
2026-01-05 04:30:53.547041 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/20c86909422e4296b16c8875f695e972/work/logs"
2026-01-05 04:30:53.836115 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/20c86909422e4296b16c8875f695e972/work/artifacts"
2026-01-05 04:30:54.142917 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/20c86909422e4296b16c8875f695e972/work/docs"
2026-01-05 04:30:54.154696 |
2026-01-05 04:30:54.154872 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-05 04:30:55.057990 | orchestrator | changed: .d..t...... ./
2026-01-05 04:30:55.058359 | orchestrator | changed: All items complete
2026-01-05 04:30:55.058414 |
2026-01-05 04:30:55.766160 | orchestrator | changed: .d..t...... ./
2026-01-05 04:30:56.530496 | orchestrator | changed: .d..t...... ./
2026-01-05 04:30:56.561071 |
2026-01-05 04:30:56.561217 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-05 04:30:56.594498 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:56.596913 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:56.620271 |
2026-01-05 04:30:56.620390 | PLAY RECAP
2026-01-05 04:30:56.620457 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-05 04:30:56.620492 |
2026-01-05 04:30:56.764671 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-05 04:30:56.767194 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-05 04:30:57.575360 |
2026-01-05 04:30:57.575565 | PLAY [Base post]
2026-01-05 04:30:57.591756 |
2026-01-05 04:30:57.591942 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-05 04:30:58.833010 | orchestrator | changed
2026-01-05 04:30:58.845450 |
2026-01-05 04:30:58.845628 | PLAY RECAP
2026-01-05 04:30:58.845722 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-05 04:30:58.845811 |
2026-01-05 04:30:59.033286 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-05 04:30:59.035163 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-05 04:30:59.865185 |
2026-01-05 04:30:59.865371 | PLAY [Base post-logs]
2026-01-05 04:30:59.876569 |
2026-01-05 04:30:59.876731 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-05 04:31:00.400738 | localhost | changed
2026-01-05 04:31:00.418766 |
2026-01-05 04:31:00.419344 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-05 04:31:00.461536 | localhost | ok
2026-01-05 04:31:00.471144 |
2026-01-05 04:31:00.471304 | TASK [Set zuul-log-path fact] 2026-01-05
04:31:00.487937 | localhost | ok 2026-01-05 04:31:00.499243 | 2026-01-05 04:31:00.499420 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-01-05 04:31:00.527742 | localhost | ok 2026-01-05 04:31:00.532728 | 2026-01-05 04:31:00.532909 | TASK [upload-logs : Create log directories] 2026-01-05 04:31:01.032724 | localhost | changed 2026-01-05 04:31:01.036525 | 2026-01-05 04:31:01.036687 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-01-05 04:31:01.591535 | localhost -> localhost | ok: Runtime: 0:00:00.006366 2026-01-05 04:31:01.601141 | 2026-01-05 04:31:01.601344 | TASK [upload-logs : Upload logs to log server] 2026-01-05 04:31:02.184550 | localhost | Output suppressed because no_log was given 2026-01-05 04:31:02.186599 | 2026-01-05 04:31:02.186723 | LOOP [upload-logs : Compress console log and json output] 2026-01-05 04:31:02.235904 | localhost | skipping: Conditional result was False 2026-01-05 04:31:02.242722 | localhost | skipping: Conditional result was False 2026-01-05 04:31:02.255275 | 2026-01-05 04:31:02.255491 | LOOP [upload-logs : Upload compressed console log and json output] 2026-01-05 04:31:02.309513 | localhost | skipping: Conditional result was False 2026-01-05 04:31:02.310121 | 2026-01-05 04:31:02.312351 | localhost | skipping: Conditional result was False 2026-01-05 04:31:02.318158 | 2026-01-05 04:31:02.318333 | LOOP [upload-logs : Upload console log and json output]